Test Report: KVM_Linux_crio 17485

8dc642b39e51c59087e6696ac1afe8c1c527ee77:2023-10-24:31589

Failed tests (27/292)

Order  Failed test  Duration (s)
28 TestAddons/parallel/Ingress 154.8
29 TestAddons/parallel/InspektorGadget 8.71
41 TestAddons/StoppedEnableDisable 155.54
157 TestIngressAddonLegacy/serial/ValidateIngressAddons 177.01
205 TestMultiNode/serial/PingHostFrom2Pods 3.22
211 TestMultiNode/serial/RestartKeepsNodes 689.34
213 TestMultiNode/serial/StopMultiNode 143.54
220 TestPreload 185.14
226 TestRunningBinaryUpgrade 168.47
235 TestStoppedBinaryUpgrade/Upgrade 287.9
262 TestPause/serial/SecondStartNoReconfiguration 57.63
275 TestStartStop/group/no-preload/serial/Stop 140.12
277 TestStartStop/group/embed-certs/serial/Stop 140.21
280 TestStartStop/group/default-k8s-diff-port/serial/Stop 140.36
281 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
282 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
287 TestStartStop/group/old-k8s-version/serial/Stop 139.61
288 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
290 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
292 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.24
293 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.3
294 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.22
295 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.19
296 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 363.45
297 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 479.33
298 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 244.23
299 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 153.85

TestAddons/parallel/Ingress (154.8s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-866342 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-866342 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:231: (dbg) Done: kubectl --context addons-866342 replace --force -f testdata/nginx-ingress-v1.yaml: (1.024568277s)
addons_test.go:244: (dbg) Run:  kubectl --context addons-866342 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f5245aa2-39c0-4f7b-917a-28296885d357] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f5245aa2-39c0-4f7b-917a-28296885d357] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.01282339s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-866342 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.378964837s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-866342 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.163
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-866342 addons disable ingress-dns --alsologtostderr -v=1: (1.552300049s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-866342 addons disable ingress --alsologtostderr -v=1: (8.034614778s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-866342 -n addons-866342
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-866342 logs -n 25: (1.389911496s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-645515 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | -p download-only-645515                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:00 UTC |
	| delete  | -p download-only-645515                                                                     | download-only-645515 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:00 UTC |
	| delete  | -p download-only-645515                                                                     | download-only-645515 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:00 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-397693 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | binary-mirror-397693                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36043                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-397693                                                                     | binary-mirror-397693 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:00 UTC |
	| addons  | enable dashboard -p                                                                         | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | addons-866342                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | addons-866342                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-866342 --wait=true                                                                | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:03 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | addons-866342                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-866342 ssh cat                                                                       | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | /opt/local-path-provisioner/pvc-36d1a6de-39d6-4c81-a7f0-3bf4da62b74d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-866342 addons disable                                                                | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:04 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-866342 ip                                                                            | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	| addons  | addons-866342 addons disable                                                                | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-866342 addons disable                                                                | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-866342 addons                                                                        | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC |                     |
	|         | addons-866342                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | -p addons-866342                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-866342 ssh curl -s                                                                   | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:04 UTC | 24 Oct 23 19:04 UTC |
	|         | -p addons-866342                                                                            |                      |         |         |                     |                     |
	| addons  | addons-866342 addons                                                                        | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:04 UTC | 24 Oct 23 19:04 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-866342 addons                                                                        | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:04 UTC | 24 Oct 23 19:04 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-866342 ip                                                                            | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:06 UTC | 24 Oct 23 19:06 UTC |
	| addons  | addons-866342 addons disable                                                                | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:06 UTC | 24 Oct 23 19:06 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-866342 addons disable                                                                | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:06 UTC | 24 Oct 23 19:06 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:00:53
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:00:53.618092   16652 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:00:53.618354   16652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:53.618365   16652 out.go:309] Setting ErrFile to fd 2...
	I1024 19:00:53.618369   16652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:53.618537   16652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 19:00:53.619115   16652 out.go:303] Setting JSON to false
	I1024 19:00:53.619927   16652 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2352,"bootTime":1698171702,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:00:53.619984   16652 start.go:138] virtualization: kvm guest
	I1024 19:00:53.622328   16652 out.go:177] * [addons-866342] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:00:53.624022   16652 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:00:53.625545   16652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:00:53.623959   16652 notify.go:220] Checking for updates...
	I1024 19:00:53.628430   16652 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:00:53.629887   16652 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:00:53.631239   16652 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:00:53.632603   16652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:00:53.634132   16652 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:00:53.664670   16652 out.go:177] * Using the kvm2 driver based on user configuration
	I1024 19:00:53.666142   16652 start.go:298] selected driver: kvm2
	I1024 19:00:53.666155   16652 start.go:902] validating driver "kvm2" against <nil>
	I1024 19:00:53.666165   16652 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:00:53.666855   16652 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:00:53.666945   16652 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:00:53.680707   16652 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:00:53.680755   16652 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:00:53.680967   16652 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:00:53.681031   16652 cni.go:84] Creating CNI manager for ""
	I1024 19:00:53.681047   16652 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:00:53.681061   16652 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1024 19:00:53.681070   16652 start_flags.go:323] config:
	{Name:addons-866342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-866342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:00:53.681186   16652 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:00:53.683088   16652 out.go:177] * Starting control plane node addons-866342 in cluster addons-866342
	I1024 19:00:53.684529   16652 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:00:53.684566   16652 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1024 19:00:53.684576   16652 cache.go:57] Caching tarball of preloaded images
	I1024 19:00:53.684641   16652 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 19:00:53.684652   16652 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:00:53.684936   16652 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/config.json ...
	I1024 19:00:53.684955   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/config.json: {Name:mk3628ed1574a5393dd97070b77f0feb57c98277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:00:53.685083   16652 start.go:365] acquiring machines lock for addons-866342: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:00:53.685126   16652 start.go:369] acquired machines lock for "addons-866342" in 28.474µs
	I1024 19:00:53.685146   16652 start.go:93] Provisioning new machine with config: &{Name:addons-866342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-866342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:00:53.685211   16652 start.go:125] createHost starting for "" (driver="kvm2")
	I1024 19:00:53.686816   16652 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1024 19:00:53.686909   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:00:53.686944   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:00:53.700021   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I1024 19:00:53.700425   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:00:53.700948   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:00:53.700969   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:00:53.701280   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:00:53.701467   16652 main.go:141] libmachine: (addons-866342) Calling .GetMachineName
	I1024 19:00:53.701617   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:00:53.701768   16652 start.go:159] libmachine.API.Create for "addons-866342" (driver="kvm2")
	I1024 19:00:53.701796   16652 client.go:168] LocalClient.Create starting
	I1024 19:00:53.701825   16652 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem
	I1024 19:00:53.961535   16652 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem
	I1024 19:00:54.441995   16652 main.go:141] libmachine: Running pre-create checks...
	I1024 19:00:54.442018   16652 main.go:141] libmachine: (addons-866342) Calling .PreCreateCheck
	I1024 19:00:54.442525   16652 main.go:141] libmachine: (addons-866342) Calling .GetConfigRaw
	I1024 19:00:54.442976   16652 main.go:141] libmachine: Creating machine...
	I1024 19:00:54.442991   16652 main.go:141] libmachine: (addons-866342) Calling .Create
	I1024 19:00:54.443152   16652 main.go:141] libmachine: (addons-866342) Creating KVM machine...
	I1024 19:00:54.444488   16652 main.go:141] libmachine: (addons-866342) DBG | found existing default KVM network
	I1024 19:00:54.445245   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:54.445040   16674 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1024 19:00:54.450829   16652 main.go:141] libmachine: (addons-866342) DBG | trying to create private KVM network mk-addons-866342 192.168.39.0/24...
	I1024 19:00:54.516239   16652 main.go:141] libmachine: (addons-866342) DBG | private KVM network mk-addons-866342 192.168.39.0/24 created
	I1024 19:00:54.516283   16652 main.go:141] libmachine: (addons-866342) Setting up store path in /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342 ...
	I1024 19:00:54.516306   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:54.516238   16674 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:00:54.516334   16652 main.go:141] libmachine: (addons-866342) Building disk image from file:///home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso
	I1024 19:00:54.516369   16652 main.go:141] libmachine: (addons-866342) Downloading /home/jenkins/minikube-integration/17485-9023/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso...
	I1024 19:00:54.732029   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:54.731909   16674 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa...
	I1024 19:00:54.806829   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:54.806710   16674 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/addons-866342.rawdisk...
	I1024 19:00:54.806866   16652 main.go:141] libmachine: (addons-866342) DBG | Writing magic tar header
	I1024 19:00:54.806881   16652 main.go:141] libmachine: (addons-866342) DBG | Writing SSH key tar header
	I1024 19:00:54.806903   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:54.806822   16674 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342 ...
	I1024 19:00:54.806997   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342
	I1024 19:00:54.807044   16652 main.go:141] libmachine: (addons-866342) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342 (perms=drwx------)
	I1024 19:00:54.807073   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube/machines
	I1024 19:00:54.807093   16652 main.go:141] libmachine: (addons-866342) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube/machines (perms=drwxr-xr-x)
	I1024 19:00:54.807124   16652 main.go:141] libmachine: (addons-866342) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube (perms=drwxr-xr-x)
	I1024 19:00:54.807135   16652 main.go:141] libmachine: (addons-866342) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023 (perms=drwxrwxr-x)
	I1024 19:00:54.807143   16652 main.go:141] libmachine: (addons-866342) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1024 19:00:54.807158   16652 main.go:141] libmachine: (addons-866342) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1024 19:00:54.807175   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:00:54.807191   16652 main.go:141] libmachine: (addons-866342) Creating domain...
	I1024 19:00:54.807205   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023
	I1024 19:00:54.807218   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1024 19:00:54.807226   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home/jenkins
	I1024 19:00:54.807238   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home
	I1024 19:00:54.807252   16652 main.go:141] libmachine: (addons-866342) DBG | Skipping /home - not owner
	I1024 19:00:54.808239   16652 main.go:141] libmachine: (addons-866342) define libvirt domain using xml: 
	I1024 19:00:54.808261   16652 main.go:141] libmachine: (addons-866342) <domain type='kvm'>
	I1024 19:00:54.808273   16652 main.go:141] libmachine: (addons-866342)   <name>addons-866342</name>
	I1024 19:00:54.808284   16652 main.go:141] libmachine: (addons-866342)   <memory unit='MiB'>4000</memory>
	I1024 19:00:54.808305   16652 main.go:141] libmachine: (addons-866342)   <vcpu>2</vcpu>
	I1024 19:00:54.808325   16652 main.go:141] libmachine: (addons-866342)   <features>
	I1024 19:00:54.808340   16652 main.go:141] libmachine: (addons-866342)     <acpi/>
	I1024 19:00:54.808353   16652 main.go:141] libmachine: (addons-866342)     <apic/>
	I1024 19:00:54.808378   16652 main.go:141] libmachine: (addons-866342)     <pae/>
	I1024 19:00:54.808410   16652 main.go:141] libmachine: (addons-866342)     
	I1024 19:00:54.808427   16652 main.go:141] libmachine: (addons-866342)   </features>
	I1024 19:00:54.808446   16652 main.go:141] libmachine: (addons-866342)   <cpu mode='host-passthrough'>
	I1024 19:00:54.808463   16652 main.go:141] libmachine: (addons-866342)   
	I1024 19:00:54.808472   16652 main.go:141] libmachine: (addons-866342)   </cpu>
	I1024 19:00:54.808511   16652 main.go:141] libmachine: (addons-866342)   <os>
	I1024 19:00:54.808536   16652 main.go:141] libmachine: (addons-866342)     <type>hvm</type>
	I1024 19:00:54.808544   16652 main.go:141] libmachine: (addons-866342)     <boot dev='cdrom'/>
	I1024 19:00:54.808553   16652 main.go:141] libmachine: (addons-866342)     <boot dev='hd'/>
	I1024 19:00:54.808560   16652 main.go:141] libmachine: (addons-866342)     <bootmenu enable='no'/>
	I1024 19:00:54.808570   16652 main.go:141] libmachine: (addons-866342)   </os>
	I1024 19:00:54.808580   16652 main.go:141] libmachine: (addons-866342)   <devices>
	I1024 19:00:54.808586   16652 main.go:141] libmachine: (addons-866342)     <disk type='file' device='cdrom'>
	I1024 19:00:54.808597   16652 main.go:141] libmachine: (addons-866342)       <source file='/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/boot2docker.iso'/>
	I1024 19:00:54.808613   16652 main.go:141] libmachine: (addons-866342)       <target dev='hdc' bus='scsi'/>
	I1024 19:00:54.808620   16652 main.go:141] libmachine: (addons-866342)       <readonly/>
	I1024 19:00:54.808633   16652 main.go:141] libmachine: (addons-866342)     </disk>
	I1024 19:00:54.808643   16652 main.go:141] libmachine: (addons-866342)     <disk type='file' device='disk'>
	I1024 19:00:54.808656   16652 main.go:141] libmachine: (addons-866342)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1024 19:00:54.808669   16652 main.go:141] libmachine: (addons-866342)       <source file='/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/addons-866342.rawdisk'/>
	I1024 19:00:54.808678   16652 main.go:141] libmachine: (addons-866342)       <target dev='hda' bus='virtio'/>
	I1024 19:00:54.808686   16652 main.go:141] libmachine: (addons-866342)     </disk>
	I1024 19:00:54.808695   16652 main.go:141] libmachine: (addons-866342)     <interface type='network'>
	I1024 19:00:54.808702   16652 main.go:141] libmachine: (addons-866342)       <source network='mk-addons-866342'/>
	I1024 19:00:54.808714   16652 main.go:141] libmachine: (addons-866342)       <model type='virtio'/>
	I1024 19:00:54.808724   16652 main.go:141] libmachine: (addons-866342)     </interface>
	I1024 19:00:54.808730   16652 main.go:141] libmachine: (addons-866342)     <interface type='network'>
	I1024 19:00:54.808751   16652 main.go:141] libmachine: (addons-866342)       <source network='default'/>
	I1024 19:00:54.808771   16652 main.go:141] libmachine: (addons-866342)       <model type='virtio'/>
	I1024 19:00:54.808786   16652 main.go:141] libmachine: (addons-866342)     </interface>
	I1024 19:00:54.808799   16652 main.go:141] libmachine: (addons-866342)     <serial type='pty'>
	I1024 19:00:54.808810   16652 main.go:141] libmachine: (addons-866342)       <target port='0'/>
	I1024 19:00:54.808827   16652 main.go:141] libmachine: (addons-866342)     </serial>
	I1024 19:00:54.808838   16652 main.go:141] libmachine: (addons-866342)     <console type='pty'>
	I1024 19:00:54.808847   16652 main.go:141] libmachine: (addons-866342)       <target type='serial' port='0'/>
	I1024 19:00:54.808867   16652 main.go:141] libmachine: (addons-866342)     </console>
	I1024 19:00:54.808880   16652 main.go:141] libmachine: (addons-866342)     <rng model='virtio'>
	I1024 19:00:54.808895   16652 main.go:141] libmachine: (addons-866342)       <backend model='random'>/dev/random</backend>
	I1024 19:00:54.808932   16652 main.go:141] libmachine: (addons-866342)     </rng>
	I1024 19:00:54.808946   16652 main.go:141] libmachine: (addons-866342)     
	I1024 19:00:54.808968   16652 main.go:141] libmachine: (addons-866342)     
	I1024 19:00:54.808990   16652 main.go:141] libmachine: (addons-866342)   </devices>
	I1024 19:00:54.809013   16652 main.go:141] libmachine: (addons-866342) </domain>
	I1024 19:00:54.809029   16652 main.go:141] libmachine: (addons-866342) 
	I1024 19:00:54.814499   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:a9:fb:bb in network default
	I1024 19:00:54.815124   16652 main.go:141] libmachine: (addons-866342) Ensuring networks are active...
	I1024 19:00:54.815155   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:54.815811   16652 main.go:141] libmachine: (addons-866342) Ensuring network default is active
	I1024 19:00:54.816130   16652 main.go:141] libmachine: (addons-866342) Ensuring network mk-addons-866342 is active
	I1024 19:00:54.816597   16652 main.go:141] libmachine: (addons-866342) Getting domain xml...
	I1024 19:00:54.817251   16652 main.go:141] libmachine: (addons-866342) Creating domain...
	I1024 19:00:56.222041   16652 main.go:141] libmachine: (addons-866342) Waiting to get IP...
	I1024 19:00:56.222681   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:56.222988   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:56.223045   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:56.222981   16674 retry.go:31] will retry after 235.339237ms: waiting for machine to come up
	I1024 19:00:56.460449   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:56.460837   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:56.460857   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:56.460803   16674 retry.go:31] will retry after 375.487717ms: waiting for machine to come up
	I1024 19:00:56.838287   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:56.838659   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:56.838679   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:56.838609   16674 retry.go:31] will retry after 362.75156ms: waiting for machine to come up
	I1024 19:00:57.203285   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:57.203703   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:57.203726   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:57.203690   16674 retry.go:31] will retry after 600.274701ms: waiting for machine to come up
	I1024 19:00:57.805396   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:57.805777   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:57.805803   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:57.805738   16674 retry.go:31] will retry after 755.565775ms: waiting for machine to come up
	I1024 19:00:58.562657   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:58.563095   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:58.563124   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:58.563074   16674 retry.go:31] will retry after 792.580761ms: waiting for machine to come up
	I1024 19:00:59.357583   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:59.357901   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:59.357925   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:59.357843   16674 retry.go:31] will retry after 1.073478461s: waiting for machine to come up
	I1024 19:01:00.433104   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:00.433519   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:00.433547   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:00.433457   16674 retry.go:31] will retry after 1.342291864s: waiting for machine to come up
	I1024 19:01:01.777946   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:01.778301   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:01.778334   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:01.778248   16674 retry.go:31] will retry after 1.848774692s: waiting for machine to come up
	I1024 19:01:03.629233   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:03.629747   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:03.629768   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:03.629690   16674 retry.go:31] will retry after 2.253036424s: waiting for machine to come up
	I1024 19:01:05.885559   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:05.886049   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:05.886076   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:05.886015   16674 retry.go:31] will retry after 2.239298601s: waiting for machine to come up
	I1024 19:01:08.126420   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:08.126691   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:08.126744   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:08.126652   16674 retry.go:31] will retry after 2.332501495s: waiting for machine to come up
	I1024 19:01:10.461530   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:10.461831   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:10.461860   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:10.461793   16674 retry.go:31] will retry after 4.390039765s: waiting for machine to come up
	I1024 19:01:14.853207   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:14.853630   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:14.853655   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:14.853589   16674 retry.go:31] will retry after 4.206775238s: waiting for machine to come up
	I1024 19:01:19.062273   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.062676   16652 main.go:141] libmachine: (addons-866342) Found IP for machine: 192.168.39.163
	I1024 19:01:19.062727   16652 main.go:141] libmachine: (addons-866342) Reserving static IP address...
	I1024 19:01:19.062751   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has current primary IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.063058   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find host DHCP lease matching {name: "addons-866342", mac: "52:54:00:26:c1:28", ip: "192.168.39.163"} in network mk-addons-866342
	I1024 19:01:19.131763   16652 main.go:141] libmachine: (addons-866342) DBG | Getting to WaitForSSH function...
	I1024 19:01:19.131791   16652 main.go:141] libmachine: (addons-866342) Reserved static IP address: 192.168.39.163
	I1024 19:01:19.131835   16652 main.go:141] libmachine: (addons-866342) Waiting for SSH to be available...
	I1024 19:01:19.134337   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.134707   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:minikube Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.134739   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.134908   16652 main.go:141] libmachine: (addons-866342) DBG | Using SSH client type: external
	I1024 19:01:19.134936   16652 main.go:141] libmachine: (addons-866342) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa (-rw-------)
	I1024 19:01:19.134982   16652 main.go:141] libmachine: (addons-866342) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 19:01:19.134996   16652 main.go:141] libmachine: (addons-866342) DBG | About to run SSH command:
	I1024 19:01:19.135006   16652 main.go:141] libmachine: (addons-866342) DBG | exit 0
	I1024 19:01:19.277327   16652 main.go:141] libmachine: (addons-866342) DBG | SSH cmd err, output: <nil>: 
	I1024 19:01:19.277549   16652 main.go:141] libmachine: (addons-866342) KVM machine creation complete!
	I1024 19:01:19.277865   16652 main.go:141] libmachine: (addons-866342) Calling .GetConfigRaw
	I1024 19:01:19.278420   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:19.278604   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:19.278764   16652 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1024 19:01:19.278784   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:19.280117   16652 main.go:141] libmachine: Detecting operating system of created instance...
	I1024 19:01:19.280131   16652 main.go:141] libmachine: Waiting for SSH to be available...
	I1024 19:01:19.280137   16652 main.go:141] libmachine: Getting to WaitForSSH function...
	I1024 19:01:19.280144   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:19.281975   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.282291   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.282316   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.282448   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:19.282622   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.282758   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.282878   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:19.283007   16652 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:19.283396   16652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1024 19:01:19.283410   16652 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1024 19:01:19.412538   16652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:01:19.412566   16652 main.go:141] libmachine: Detecting the provisioner...
	I1024 19:01:19.412577   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:19.415189   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.415502   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.415536   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.415650   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:19.415830   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.415986   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.416122   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:19.416290   16652 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:19.416613   16652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1024 19:01:19.416628   16652 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1024 19:01:19.546090   16652 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g71212f5-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1024 19:01:19.546162   16652 main.go:141] libmachine: found compatible host: buildroot
	I1024 19:01:19.546177   16652 main.go:141] libmachine: Provisioning with buildroot...
	I1024 19:01:19.546189   16652 main.go:141] libmachine: (addons-866342) Calling .GetMachineName
	I1024 19:01:19.546420   16652 buildroot.go:166] provisioning hostname "addons-866342"
	I1024 19:01:19.546439   16652 main.go:141] libmachine: (addons-866342) Calling .GetMachineName
	I1024 19:01:19.546622   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:19.549169   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.549524   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.549579   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.549685   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:19.549861   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.550002   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.550152   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:19.550372   16652 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:19.550740   16652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1024 19:01:19.550758   16652 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-866342 && echo "addons-866342" | sudo tee /etc/hostname
	I1024 19:01:19.690042   16652 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-866342
	
	I1024 19:01:19.690066   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:19.692641   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.693114   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.693149   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.693250   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:19.693407   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.693577   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.693715   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:19.693859   16652 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:19.694182   16652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1024 19:01:19.694200   16652 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-866342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-866342/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-866342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:01:19.828475   16652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
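The two provisioning commands above set the guest hostname (sudo hostname plus /etc/hostname) and then patch /etc/hosts so 127.0.1.1 resolves to addons-866342. A minimal way to spot-check the result, assuming the minikube CLI and this profile are still available; the commands are illustrative and were not part of the recorded run:

    # confirm the hostname and the 127.0.1.1 mapping the script above wrote
    minikube -p addons-866342 ssh "hostname && grep addons-866342 /etc/hosts"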
	I1024 19:01:19.828504   16652 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 19:01:19.828529   16652 buildroot.go:174] setting up certificates
	I1024 19:01:19.828538   16652 provision.go:83] configureAuth start
	I1024 19:01:19.828549   16652 main.go:141] libmachine: (addons-866342) Calling .GetMachineName
	I1024 19:01:19.828773   16652 main.go:141] libmachine: (addons-866342) Calling .GetIP
	I1024 19:01:19.831502   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.831850   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.831885   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.832000   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:19.834270   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.834643   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.834674   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.834811   16652 provision.go:138] copyHostCerts
	I1024 19:01:19.834863   16652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 19:01:19.835005   16652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 19:01:19.835088   16652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 19:01:19.835146   16652 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.addons-866342 san=[192.168.39.163 192.168.39.163 localhost 127.0.0.1 minikube addons-866342]
	I1024 19:01:19.938205   16652 provision.go:172] copyRemoteCerts
	I1024 19:01:19.938265   16652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:01:19.938288   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:19.940745   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.941073   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.941099   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.941316   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:19.941501   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.941649   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:19.941860   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:20.035343   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 19:01:20.059410   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1024 19:01:20.082392   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:01:20.105763   16652 provision.go:86] duration metric: configureAuth took 277.20792ms
	I1024 19:01:20.105793   16652 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:01:20.106026   16652 config.go:182] Loaded profile config "addons-866342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:01:20.106115   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:20.108682   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.109032   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.109077   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.109213   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:20.109406   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.109548   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.109652   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:20.109840   16652 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:20.110232   16652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1024 19:01:20.110255   16652 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:01:20.469265   16652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
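The step above writes /etc/sysconfig/crio.minikube so CRI-O treats the 10.96.0.0/12 service CIDR as an insecure registry range, then restarts the runtime. A quick way to spot-check that the option landed, assuming the profile is still around (illustrative commands, not part of the recorded run):

    # show the drop-in file minikube just wrote
    minikube -p addons-866342 ssh "cat /etc/sysconfig/crio.minikube"
    # confirm CRI-O came back after the restart
    minikube -p addons-866342 ssh "sudo systemctl is-active crio"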
	
	I1024 19:01:20.469288   16652 main.go:141] libmachine: Checking connection to Docker...
	I1024 19:01:20.469330   16652 main.go:141] libmachine: (addons-866342) Calling .GetURL
	I1024 19:01:20.470540   16652 main.go:141] libmachine: (addons-866342) DBG | Using libvirt version 6000000
	I1024 19:01:20.473891   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.474292   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.474324   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.474514   16652 main.go:141] libmachine: Docker is up and running!
	I1024 19:01:20.474528   16652 main.go:141] libmachine: Reticulating splines...
	I1024 19:01:20.474535   16652 client.go:171] LocalClient.Create took 26.772732668s
	I1024 19:01:20.474554   16652 start.go:167] duration metric: libmachine.API.Create for "addons-866342" took 26.772787359s
	I1024 19:01:20.474566   16652 start.go:300] post-start starting for "addons-866342" (driver="kvm2")
	I1024 19:01:20.474579   16652 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:01:20.474602   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:20.474832   16652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:01:20.474863   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:20.476800   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.477115   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.477157   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.477285   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:20.477449   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.477588   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:20.477711   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:20.570396   16652 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:01:20.574655   16652 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 19:01:20.574676   16652 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 19:01:20.574736   16652 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 19:01:20.574757   16652 start.go:303] post-start completed in 100.185858ms
	I1024 19:01:20.574789   16652 main.go:141] libmachine: (addons-866342) Calling .GetConfigRaw
	I1024 19:01:20.630665   16652 main.go:141] libmachine: (addons-866342) Calling .GetIP
	I1024 19:01:20.633368   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.633685   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.633713   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.634079   16652 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/config.json ...
	I1024 19:01:20.634288   16652 start.go:128] duration metric: createHost completed in 26.949067524s
	I1024 19:01:20.634314   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:20.636708   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.637006   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.637036   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.637144   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:20.637354   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.637512   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.637663   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:20.637820   16652 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:20.638154   16652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1024 19:01:20.638166   16652 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 19:01:20.770027   16652 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698174080.754216177
	
	I1024 19:01:20.770046   16652 fix.go:206] guest clock: 1698174080.754216177
	I1024 19:01:20.770053   16652 fix.go:219] Guest: 2023-10-24 19:01:20.754216177 +0000 UTC Remote: 2023-10-24 19:01:20.634300487 +0000 UTC m=+27.067710926 (delta=119.91569ms)
	I1024 19:01:20.770072   16652 fix.go:190] guest clock delta is within tolerance: 119.91569ms
	I1024 19:01:20.770079   16652 start.go:83] releasing machines lock for "addons-866342", held for 27.084939654s
	I1024 19:01:20.770107   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:20.770358   16652 main.go:141] libmachine: (addons-866342) Calling .GetIP
	I1024 19:01:20.773083   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.773425   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.773458   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.773629   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:20.774055   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:20.774215   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:20.774307   16652 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:01:20.774351   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:20.774470   16652 ssh_runner.go:195] Run: cat /version.json
	I1024 19:01:20.774496   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:20.776892   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.777136   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.777260   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.777306   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.777405   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:20.777517   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.777547   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.777566   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.777722   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:20.777792   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:20.777963   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.777953   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:20.778123   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:20.778258   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:20.932612   16652 ssh_runner.go:195] Run: systemctl --version
	I1024 19:01:20.938736   16652 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:01:21.597436   16652 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 19:01:21.603674   16652 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:01:21.603745   16652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:01:21.620393   16652 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
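Here minikube sidelines the pre-existing podman bridge CNI definition by renaming it with a .mk_disabled suffix, so only the CNI it configures later is active. A sketch for listing what was disabled on the node, assuming the profile still exists (illustrative, not part of the recorded run):

    # disabled configs keep their content but gain the .mk_disabled suffix
    minikube -p addons-866342 ssh "ls /etc/cni/net.d/"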
	I1024 19:01:21.620414   16652 start.go:472] detecting cgroup driver to use...
	I1024 19:01:21.620474   16652 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:01:21.637583   16652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:01:21.650249   16652 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:01:21.650318   16652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:01:21.663323   16652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:01:21.676403   16652 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:01:21.779753   16652 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:01:21.895677   16652 docker.go:214] disabling docker service ...
	I1024 19:01:21.895754   16652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:01:21.908479   16652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:01:21.920037   16652 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:01:22.019018   16652 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:01:22.136023   16652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:01:22.148608   16652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:01:22.165724   16652 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:01:22.165776   16652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:01:22.174313   16652 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:01:22.174364   16652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:01:22.183213   16652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:01:22.191818   16652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:01:22.200747   16652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:01:22.209531   16652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:01:22.217144   16652 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 19:01:22.217178   16652 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 19:01:22.229466   16652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
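The failed sysctl above is expected on a freshly booted guest: the net.bridge.* entries only exist once the br_netfilter module is loaded, which is why minikube follows up with modprobe and then enables IPv4 forwarding. A sketch for re-checking these kube-proxy/CNI prerequisites by hand (illustrative, not part of the recorded run):

    # the module must be loaded for the bridge-nf sysctls to exist at all
    minikube -p addons-866342 ssh "lsmod | grep br_netfilter"
    # both values should report 1 after the steps above
    minikube -p addons-866342 ssh "sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward"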
	I1024 19:01:22.237027   16652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:01:22.349394   16652 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:01:22.510565   16652 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:01:22.510644   16652 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:01:22.515621   16652 start.go:540] Will wait 60s for crictl version
	I1024 19:01:22.515669   16652 ssh_runner.go:195] Run: which crictl
	I1024 19:01:22.522282   16652 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:01:22.562264   16652 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 19:01:22.562363   16652 ssh_runner.go:195] Run: crio --version
	I1024 19:01:22.605750   16652 ssh_runner.go:195] Run: crio --version
	I1024 19:01:22.663303   16652 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 19:01:22.665251   16652 main.go:141] libmachine: (addons-866342) Calling .GetIP
	I1024 19:01:22.668301   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:22.668630   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:22.668670   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:22.668815   16652 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 19:01:22.672932   16652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:01:22.685527   16652 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:01:22.685580   16652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:01:22.718198   16652 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 19:01:22.718257   16652 ssh_runner.go:195] Run: which lz4
	I1024 19:01:22.722016   16652 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 19:01:22.725953   16652 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:01:22.725981   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1024 19:01:24.471185   16652 crio.go:444] Took 1.749190 seconds to copy over tarball
	I1024 19:01:24.471252   16652 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 19:01:27.424688   16652 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.953407338s)
	I1024 19:01:27.424715   16652 crio.go:451] Took 2.953507 seconds to extract the tarball
	I1024 19:01:27.424723   16652 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 19:01:27.465621   16652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:01:27.536656   16652 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:01:27.536682   16652 cache_images.go:84] Images are preloaded, skipping loading
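The preload path above avoids pulling each control-plane image individually: a prebuilt tarball of the v1.28.3 CRI-O image set is copied into the VM, unpacked under /var, and crictl then reports every required image as already present. A sketch for confirming that from outside (illustrative, not part of the recorded run):

    # the preloaded store should already contain the v1.28.3 control-plane images
    minikube -p addons-866342 ssh "sudo crictl images | grep kube-apiserver"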
	I1024 19:01:27.536768   16652 ssh_runner.go:195] Run: crio config
	I1024 19:01:27.603082   16652 cni.go:84] Creating CNI manager for ""
	I1024 19:01:27.603106   16652 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:01:27.603129   16652 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:01:27.603151   16652 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.163 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-866342 NodeName:addons-866342 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:01:27.603323   16652 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-866342"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
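The generated kubeadm config above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into the single file that is later copied to /var/tmp/minikube/kubeadm.yaml on the node. If a config like this needs to be exercised without touching cluster state, kubeadm supports a dry run; a sketch using the same binary path the test uses (illustrative, not part of the recorded run):

    # run on the node: render what init would do without touching /etc/kubernetes
    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run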
	
	I1024 19:01:27.603416   16652 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-866342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-866342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:01:27.603497   16652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:01:27.612394   16652 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:01:27.612459   16652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:01:27.620511   16652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1024 19:01:27.637001   16652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:01:27.654181   16652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1024 19:01:27.671009   16652 ssh_runner.go:195] Run: grep 192.168.39.163	control-plane.minikube.internal$ /etc/hosts
	I1024 19:01:27.674681   16652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:01:27.685472   16652 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342 for IP: 192.168.39.163
	I1024 19:01:27.685511   16652 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:27.685629   16652 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 19:01:27.781869   16652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt ...
	I1024 19:01:27.781899   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt: {Name:mk5986d412e7800237b3efcd0cbb9849437180c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:27.782051   16652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key ...
	I1024 19:01:27.782061   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key: {Name:mkfff13cbfa1679f2c22954f13a806f8b04b8c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:27.782129   16652 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 19:01:27.895659   16652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt ...
	I1024 19:01:27.895684   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt: {Name:mkfa7ee4955395e6d99ed1452389a5750c3b1665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:27.895812   16652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key ...
	I1024 19:01:27.895822   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key: {Name:mk489124e20b3e297af3411bd0d812f2e771776f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:27.895924   16652 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.key
	I1024 19:01:27.895938   16652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt with IP's: []
	I1024 19:01:28.035476   16652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt ...
	I1024 19:01:28.035505   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: {Name:mk1a541f9512dfeb8d36c62970e267637fe02fa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:28.035642   16652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.key ...
	I1024 19:01:28.035653   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.key: {Name:mk302e4370628b1ce6f2b5b21c790bd66ebab1d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:28.035716   16652 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.key.a64e5ae8
	I1024 19:01:28.035734   16652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.crt.a64e5ae8 with IP's: [192.168.39.163 10.96.0.1 127.0.0.1 10.0.0.1]
	I1024 19:01:28.181446   16652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.crt.a64e5ae8 ...
	I1024 19:01:28.181471   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.crt.a64e5ae8: {Name:mk69844b9e5c4ab3149565250143ae625374bad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:28.181609   16652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.key.a64e5ae8 ...
	I1024 19:01:28.181619   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.key.a64e5ae8: {Name:mk8ecef18ebccb05b2d420f450e8ebd230667ad3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:28.181682   16652 certs.go:337] copying /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.crt.a64e5ae8 -> /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.crt
	I1024 19:01:28.181762   16652 certs.go:341] copying /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.key.a64e5ae8 -> /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.key
	I1024 19:01:28.181809   16652 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.key
	I1024 19:01:28.181824   16652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.crt with IP's: []
	I1024 19:01:28.304931   16652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.crt ...
	I1024 19:01:28.304955   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.crt: {Name:mkb4d56decaceefc744af9b1328b7073d4ce7707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:28.305088   16652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.key ...
	I1024 19:01:28.305100   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.key: {Name:mk2931679b2b1bb2ba63ae9a95bb1a04e4212768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:28.305261   16652 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 19:01:28.305315   16652 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 19:01:28.305350   16652 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:01:28.305383   16652 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 19:01:28.305915   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:01:28.332178   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:01:28.354638   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:01:28.376739   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 19:01:28.399183   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:01:28.421963   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:01:28.444449   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:01:28.469716   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 19:01:28.492307   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:01:28.516336   16652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
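At this point the profile's client, apiserver and proxy-client key pairs generated above have been pushed to /var/lib/minikube/certs on the node. The SANs recorded for the apiserver certificate (192.168.39.163, 10.96.0.1, 127.0.0.1, 10.0.0.1) can be read back with openssl on the guest; a sketch (illustrative, not part of the recorded run):

    # print the Subject Alternative Name block of the pushed apiserver cert
    minikube -p addons-866342 ssh "openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'"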
	I1024 19:01:28.536768   16652 ssh_runner.go:195] Run: openssl version
	I1024 19:01:28.542636   16652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:01:28.551917   16652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:01:28.556610   16652 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:01:28.556763   16652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:01:28.562604   16652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:01:28.572374   16652 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:01:28.576716   16652 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:01:28.576756   16652 kubeadm.go:404] StartCluster: {Name:addons-866342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-866342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:01:28.576824   16652 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:01:28.576867   16652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:01:28.621983   16652 cri.go:89] found id: ""
	I1024 19:01:28.749064   16652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:01:28.758460   16652 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:01:28.766781   16652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:01:28.774943   16652 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:01:28.774985   16652 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
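The init invocation above runs kubeadm with a long --ignore-preflight-errors list: the directory and file availability checks, port 10250, swap, CPU count and memory checks are all skipped, since minikube has already prepared the node. If one of those checks ever needs to be inspected on its own, the preflight phase can be run separately; a sketch using the same binary path (illustrative, not part of the recorded run):

    # run on the node: execute only kubeadm's preflight checks against the same config
    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml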
	I1024 19:01:28.942631   16652 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:01:41.071771   16652 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1024 19:01:41.071846   16652 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:01:41.071931   16652 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:01:41.072041   16652 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:01:41.072220   16652 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 19:01:41.072303   16652 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:01:41.074115   16652 out.go:204]   - Generating certificates and keys ...
	I1024 19:01:41.074209   16652 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:01:41.074268   16652 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:01:41.074324   16652 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:01:41.074402   16652 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:01:41.074454   16652 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1024 19:01:41.074501   16652 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1024 19:01:41.074574   16652 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1024 19:01:41.074734   16652 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-866342 localhost] and IPs [192.168.39.163 127.0.0.1 ::1]
	I1024 19:01:41.074800   16652 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1024 19:01:41.074943   16652 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-866342 localhost] and IPs [192.168.39.163 127.0.0.1 ::1]
	I1024 19:01:41.075013   16652 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:01:41.075110   16652 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:01:41.075186   16652 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1024 19:01:41.075253   16652 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:01:41.075332   16652 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:01:41.075410   16652 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:01:41.075520   16652 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:01:41.075574   16652 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:01:41.075640   16652 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:01:41.075712   16652 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:01:41.078275   16652 out.go:204]   - Booting up control plane ...
	I1024 19:01:41.078378   16652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:01:41.078483   16652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:01:41.078552   16652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:01:41.078656   16652 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:01:41.078758   16652 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:01:41.078799   16652 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:01:41.078962   16652 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:01:41.079065   16652 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002101 seconds
	I1024 19:01:41.079212   16652 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:01:41.079368   16652 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:01:41.079418   16652 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:01:41.079631   16652 kubeadm.go:322] [mark-control-plane] Marking the node addons-866342 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 19:01:41.079688   16652 kubeadm.go:322] [bootstrap-token] Using token: a0j6ox.ibf86dwwapxuzwwq
	I1024 19:01:41.081277   16652 out.go:204]   - Configuring RBAC rules ...
	I1024 19:01:41.081410   16652 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:01:41.081507   16652 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:01:41.081660   16652 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:01:41.081776   16652 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:01:41.081866   16652 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:01:41.081931   16652 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:01:41.082029   16652 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:01:41.082089   16652 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:01:41.082158   16652 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:01:41.082170   16652 kubeadm.go:322] 
	I1024 19:01:41.082284   16652 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:01:41.082310   16652 kubeadm.go:322] 
	I1024 19:01:41.082408   16652 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:01:41.082415   16652 kubeadm.go:322] 
	I1024 19:01:41.082434   16652 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:01:41.082483   16652 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:01:41.082528   16652 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:01:41.082537   16652 kubeadm.go:322] 
	I1024 19:01:41.082598   16652 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1024 19:01:41.082615   16652 kubeadm.go:322] 
	I1024 19:01:41.082695   16652 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 19:01:41.082705   16652 kubeadm.go:322] 
	I1024 19:01:41.082775   16652 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:01:41.082838   16652 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:01:41.082899   16652 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:01:41.082905   16652 kubeadm.go:322] 
	I1024 19:01:41.082969   16652 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:01:41.083061   16652 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:01:41.083076   16652 kubeadm.go:322] 
	I1024 19:01:41.083148   16652 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token a0j6ox.ibf86dwwapxuzwwq \
	I1024 19:01:41.083230   16652 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f \
	I1024 19:01:41.083248   16652 kubeadm.go:322] 	--control-plane 
	I1024 19:01:41.083251   16652 kubeadm.go:322] 
	I1024 19:01:41.083316   16652 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:01:41.083323   16652 kubeadm.go:322] 
	I1024 19:01:41.083384   16652 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a0j6ox.ibf86dwwapxuzwwq \
	I1024 19:01:41.083547   16652 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
	I1024 19:01:41.083562   16652 cni.go:84] Creating CNI manager for ""
	I1024 19:01:41.083568   16652 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:01:41.086053   16652 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 19:01:41.087442   16652 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 19:01:41.112058   16652 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
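(The bridge CNI step above writes a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the file's contents are not echoed into the log. As a rough sketch only, a bridge-plus-portmap conflist of this kind generally has the shape below; the field values, in particular the pod subnet, are illustrative assumptions rather than the file minikube actually generated for this run.)

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}

(The "bridge" plugin attaches each pod to a Linux bridge and assigns addresses via host-local IPAM; the "portmap" plugin is what implements hostPort mappings for the CRI-O runtime.)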
	I1024 19:01:41.160187   16652 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:01:41.160266   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:41.160293   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=addons-866342 minikube.k8s.io/updated_at=2023_10_24T19_01_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:41.214190   16652 ops.go:34] apiserver oom_adj: -16
	I1024 19:01:41.323047   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:41.433148   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:42.037613   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:42.537263   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:43.037986   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:43.537170   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:44.037466   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:44.537338   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:45.037949   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:45.537159   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:46.037753   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:46.537147   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:47.037719   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:47.537042   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:48.037582   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:48.537805   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:49.037016   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:49.537656   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:50.037866   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:50.537863   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:51.037558   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:51.537874   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:52.037253   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:52.537520   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:53.037052   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:53.156643   16652 kubeadm.go:1081] duration metric: took 11.996430206s to wait for elevateKubeSystemPrivileges.
	I1024 19:01:53.156671   16652 kubeadm.go:406] StartCluster complete in 24.579917364s
	I1024 19:01:53.156687   16652 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:53.156803   16652 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:01:53.157191   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:53.157402   16652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:01:53.157493   16652 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
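(The toEnable map above is the full addon selection for this profile: cloud-spanner, csi-hostpath-driver, default-storageclass, gcp-auth, helm-tiller, ingress, ingress-dns, inspektor-gadget, metrics-server, nvidia-device-plugin, registry, storage-provisioner, storage-provisioner-rancher and volumesnapshots are on, everything else is off. Outside the test harness the same switches can be flipped individually with the minikube CLI; the commands below are shown only for orientation and are not part of the captured log.)

	out/minikube-linux-amd64 -p addons-866342 addons enable ingress
	out/minikube-linux-amd64 -p addons-866342 addons disable inspektor-gadget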
	I1024 19:01:53.157582   16652 addons.go:69] Setting volumesnapshots=true in profile "addons-866342"
	I1024 19:01:53.157599   16652 addons.go:69] Setting default-storageclass=true in profile "addons-866342"
	I1024 19:01:53.157601   16652 addons.go:69] Setting ingress-dns=true in profile "addons-866342"
	I1024 19:01:53.157617   16652 addons.go:69] Setting registry=true in profile "addons-866342"
	I1024 19:01:53.157619   16652 addons.go:231] Setting addon ingress-dns=true in "addons-866342"
	I1024 19:01:53.157625   16652 config.go:182] Loaded profile config "addons-866342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:01:53.157638   16652 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-866342"
	I1024 19:01:53.157642   16652 addons.go:69] Setting storage-provisioner=true in profile "addons-866342"
	I1024 19:01:53.157653   16652 addons.go:231] Setting addon storage-provisioner=true in "addons-866342"
	I1024 19:01:53.157643   16652 addons.go:69] Setting inspektor-gadget=true in profile "addons-866342"
	I1024 19:01:53.157628   16652 addons.go:231] Setting addon registry=true in "addons-866342"
	I1024 19:01:53.157665   16652 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-866342"
	I1024 19:01:53.157667   16652 addons.go:231] Setting addon inspektor-gadget=true in "addons-866342"
	I1024 19:01:53.157674   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.157696   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.157697   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.157707   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.157709   16652 addons.go:69] Setting metrics-server=true in profile "addons-866342"
	I1024 19:01:53.157719   16652 addons.go:231] Setting addon metrics-server=true in "addons-866342"
	I1024 19:01:53.157747   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.158095   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158096   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158103   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158095   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158111   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158126   16652 addons.go:69] Setting helm-tiller=true in profile "addons-866342"
	I1024 19:01:53.158141   16652 addons.go:231] Setting addon helm-tiller=true in "addons-866342"
	I1024 19:01:53.158143   16652 addons.go:69] Setting gcp-auth=true in profile "addons-866342"
	I1024 19:01:53.158149   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158162   16652 mustload.go:65] Loading cluster: addons-866342
	I1024 19:01:53.158174   16652 addons.go:69] Setting ingress=true in profile "addons-866342"
	I1024 19:01:53.157629   16652 addons.go:69] Setting cloud-spanner=true in profile "addons-866342"
	I1024 19:01:53.158181   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158186   16652 addons.go:231] Setting addon ingress=true in "addons-866342"
	I1024 19:01:53.158189   16652 addons.go:231] Setting addon cloud-spanner=true in "addons-866342"
	I1024 19:01:53.158175   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.158223   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.158131   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.157608   16652 addons.go:231] Setting addon volumesnapshots=true in "addons-866342"
	I1024 19:01:53.158326   16652 config.go:182] Loaded profile config "addons-866342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:01:53.158525   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158535   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.157619   16652 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-866342"
	I1024 19:01:53.158552   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158600   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.157697   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.158641   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158669   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158163   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158753   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.158802   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158877   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158899   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158926   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158128   16652 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-866342"
	I1024 19:01:53.158941   16652 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-866342"
	I1024 19:01:53.158944   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.157623   16652 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-866342"
	I1024 19:01:53.158991   16652 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-866342"
	I1024 19:01:53.159179   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.159251   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.159268   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.159292   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.159542   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.159574   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.178136   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43509
	I1024 19:01:53.178998   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.179518   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.179538   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.179898   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.180113   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.180820   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I1024 19:01:53.180979   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I1024 19:01:53.181392   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.181949   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.181966   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.182364   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.182948   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.182984   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.183423   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.183792   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.183820   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.184419   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.184964   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.184994   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.185339   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.185520   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.187919   16652 addons.go:231] Setting addon default-storageclass=true in "addons-866342"
	I1024 19:01:53.187954   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.188313   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.188343   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.188521   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46601
	I1024 19:01:53.196209   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39725
	I1024 19:01:53.196352   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I1024 19:01:53.196441   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I1024 19:01:53.196779   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.197483   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.197737   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.197782   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.197789   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.197854   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.197887   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.197967   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.197981   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.198222   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.198389   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.198414   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.198656   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.198672   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.199120   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.199194   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.199212   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.199271   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.199311   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.199742   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.199769   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.199897   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.199930   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.199995   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34819
	I1024 19:01:53.200640   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.200673   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.200880   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.201396   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.201430   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.201623   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.202059   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.202075   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.202367   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.202818   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.202845   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.213887   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I1024 19:01:53.213962   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42545
	I1024 19:01:53.214014   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I1024 19:01:53.214374   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.214468   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.214780   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.215238   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.215254   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.215364   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.215376   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.215485   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.215496   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.215838   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.216364   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.216408   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.216842   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.217355   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.217387   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.217828   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.217990   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.231014   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45837
	I1024 19:01:53.231492   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.231588   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41985
	I1024 19:01:53.232072   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.232511   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.232530   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.232651   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.232662   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.232979   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.233267   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.233857   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.233895   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.234040   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.234094   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.234336   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33725
	I1024 19:01:53.235114   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.235182   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40529
	I1024 19:01:53.235586   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.236033   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.236052   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.236402   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.236536   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.236819   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.236836   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.236897   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I1024 19:01:53.237204   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.237284   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.237518   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.237699   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.237715   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.237772   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33545
	I1024 19:01:53.238142   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.238195   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.238402   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.238563   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.238582   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.238597   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.239132   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.240655   16652 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1024 19:01:53.239431   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.239554   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.240983   16652 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-866342"
	I1024 19:01:53.244713   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I1024 19:01:53.244729   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I1024 19:01:53.245145   16652 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1024 19:01:53.245157   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1024 19:01:53.245175   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.245261   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.245677   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.245707   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.246874   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
	I1024 19:01:53.248548   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1024 19:01:53.247504   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.247505   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.247809   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.249037   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I1024 19:01:53.251664   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1024 19:01:53.250443   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.250476   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.250512   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.250590   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.250638   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.250777   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.251274   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.253153   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1024 19:01:53.251742   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.251755   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.251779   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.251796   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.252277   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.252294   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.252756   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I1024 19:01:53.254519   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.254587   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.254843   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.254930   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.254958   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.255090   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.255677   16652 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1024 19:01:53.256030   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.256887   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1024 19:01:53.257414   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.257493   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.257510   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.257956   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.258769   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.258794   16652 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1024 19:01:53.258807   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1024 19:01:53.258826   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.258861   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.258889   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.257981   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.259064   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.258242   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.258468   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I1024 19:01:53.258724   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1024 19:01:53.259468   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.259660   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.259702   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.260528   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.260909   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45001
	I1024 19:01:53.262557   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1024 19:01:53.263218   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.263248   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38731
	I1024 19:01:53.264174   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.264456   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1024 19:01:53.264467   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.265556   16652 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.3
	I1024 19:01:53.266759   16652 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:01:53.265579   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.265235   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.265273   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.265459   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.264717   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.264909   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.266144   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.268012   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.268356   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.268428   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I1024 19:01:53.269587   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.269969   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.270831   16652 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:01:53.271029   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.271197   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.271924   16652 out.go:177]   - Using image docker.io/registry:2.8.3
	I1024 19:01:53.272291   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.272925   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
	I1024 19:01:53.273019   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1024 19:01:53.273396   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.274203   16652 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1024 19:01:53.275576   16652 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1024 19:01:53.275593   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1024 19:01:53.274194   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.275609   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.276972   16652 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1024 19:01:53.276985   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1024 19:01:53.276998   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.274490   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.274566   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.274677   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.274698   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.274726   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.276112   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.278490   16652 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1024 19:01:53.278574   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.278620   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1024 19:01:53.281460   16652 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1024 19:01:53.281470   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1024 19:01:53.281492   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.281460   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1024 19:01:53.281526   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.279521   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.281589   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.279190   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.282181   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.282276   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.282463   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.282518   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.283191   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.283795   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33137
	I1024 19:01:53.283966   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.284031   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.284334   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44615
	I1024 19:01:53.286221   16652 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:01:53.284357   16652 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:01:53.284390   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.284677   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.285213   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.285235   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.286043   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.286148   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.286474   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.286617   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.287639   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:01:53.287654   16652 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:01:53.287668   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:01:53.287684   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.287657   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.287730   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.287741   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.287749   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.287763   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.287783   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.287803   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.288336   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.288365   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.288420   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.288431   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.288449   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.288458   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.288486   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.288509   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.289859   16652 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1024 19:01:53.291015   16652 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 19:01:53.291028   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 19:01:53.291044   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.288972   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.288996   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.292340   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1024 19:01:53.289059   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.289062   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.289073   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.289081   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.289212   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.291312   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.291335   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.291359   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.292103   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.292132   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.292403   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.293585   16652 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1024 19:01:53.293595   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1024 19:01:53.293611   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.293644   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.293682   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.293699   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.293724   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.293744   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.294392   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.294409   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.294478   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.294515   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.294535   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.294573   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.294612   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.294949   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.294970   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.295019   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.295030   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.295063   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.295078   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.295091   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.295187   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.295235   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.295675   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.295920   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.295937   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.295918   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.296361   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.296417   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.296500   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.298035   16652 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1024 19:01:53.297489   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.297923   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.300495   16652 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1024 19:01:53.299309   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.300513   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1024 19:01:53.300527   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.300534   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.299321   16652 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1024 19:01:53.299500   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.301953   16652 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1024 19:01:53.301966   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1024 19:01:53.301986   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.301995   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.302098   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.303048   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.303357   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.303375   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.303514   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.303674   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.303785   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.303916   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	W1024 19:01:53.304750   16652 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43222->192.168.39.163:22: read: connection reset by peer
	I1024 19:01:53.304778   16652 retry.go:31] will retry after 200.567523ms: ssh: handshake failed: read tcp 192.168.39.1:43222->192.168.39.163:22: read: connection reset by peer
	I1024 19:01:53.304849   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.305250   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.305268   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.305469   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.305626   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.305773   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.305895   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	W1024 19:01:53.306797   16652 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43236->192.168.39.163:22: read: connection reset by peer
	I1024 19:01:53.306818   16652 retry.go:31] will retry after 257.504283ms: ssh: handshake failed: read tcp 192.168.39.1:43236->192.168.39.163:22: read: connection reset by peer
	I1024 19:01:53.308758   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I1024 19:01:53.309057   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.309491   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.309509   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.309831   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.310008   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.311355   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.312921   16652 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1024 19:01:53.314352   16652 out.go:177]   - Using image docker.io/busybox:stable
	I1024 19:01:53.315604   16652 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1024 19:01:53.315614   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1024 19:01:53.315626   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.317998   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.318308   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.318321   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.318448   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.318594   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.318720   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.318855   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.444095   16652 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-866342" context rescaled to 1 replicas
	I1024 19:01:53.444138   16652 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:01:53.445780   16652 out.go:177] * Verifying Kubernetes components...
	I1024 19:01:53.447875   16652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:01:53.461861   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1024 19:01:53.473443   16652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 19:01:53.485254   16652 node_ready.go:35] waiting up to 6m0s for node "addons-866342" to be "Ready" ...
	I1024 19:01:53.527192   16652 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1024 19:01:53.527212   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1024 19:01:53.534833   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1024 19:01:53.604120   16652 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1024 19:01:53.604143   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1024 19:01:53.610586   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:01:53.616641   16652 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1024 19:01:53.616663   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1024 19:01:53.623559   16652 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 19:01:53.623576   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1024 19:01:53.623720   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1024 19:01:53.631599   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:01:53.646077   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1024 19:01:53.646098   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1024 19:01:53.671240   16652 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1024 19:01:53.671259   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1024 19:01:53.681917   16652 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1024 19:01:53.681941   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1024 19:01:53.715877   16652 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1024 19:01:53.715894   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1024 19:01:53.762595   16652 node_ready.go:49] node "addons-866342" has status "Ready":"True"
	I1024 19:01:53.762615   16652 node_ready.go:38] duration metric: took 277.327308ms waiting for node "addons-866342" to be "Ready" ...
	I1024 19:01:53.762624   16652 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
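The readiness gate above can be reproduced by hand; a minimal sketch, assuming only the kube-system labels quoted in the log line (the timeout value mirrors the 6m wait stated there):

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    # component=etcd, component=kube-apiserver, component=kube-controller-manager,
    # k8s-app=kube-proxy and component=kube-scheduler follow the same pattern.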
	I1024 19:01:53.803167   16652 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 19:01:53.803185   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 19:01:53.819970   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1024 19:01:53.819996   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1024 19:01:53.841504   16652 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1024 19:01:53.841532   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1024 19:01:53.893178   16652 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1024 19:01:53.893202   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1024 19:01:53.905813   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1024 19:01:53.912073   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1024 19:01:53.917237   16652 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1024 19:01:53.917254   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1024 19:01:53.933071   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1024 19:01:53.946730   16652 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:01:53.946750   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 19:01:53.974195   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1024 19:01:53.974215   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1024 19:01:54.008333   16652 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1024 19:01:54.008353   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1024 19:01:54.094615   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:01:54.152965   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1024 19:01:54.158271   16652 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1024 19:01:54.158287   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1024 19:01:54.160847   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1024 19:01:54.160867   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1024 19:01:54.189404   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1024 19:01:54.189421   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1024 19:01:54.277353   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1024 19:01:54.277376   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1024 19:01:54.283319   16652 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:01:54.283342   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1024 19:01:54.294904   16652 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1024 19:01:54.294927   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1024 19:01:54.359955   16652 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1024 19:01:54.359980   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1024 19:01:54.378805   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:01:54.397269   16652 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1024 19:01:54.397291   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1024 19:01:54.467878   16652 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1024 19:01:54.467901   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1024 19:01:54.513926   16652 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1024 19:01:54.513946   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1024 19:01:54.555510   16652 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace to be "Ready" ...
	I1024 19:01:54.562615   16652 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1024 19:01:54.562631   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1024 19:01:54.581769   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1024 19:01:54.631207   16652 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1024 19:01:54.631226   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1024 19:01:54.703707   16652 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1024 19:01:54.703733   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1024 19:01:54.764501   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1024 19:01:56.626742   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:01:58.708673   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:01:59.179931   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.71803685s)
	I1024 19:01:59.179998   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:01:59.179999   16652 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.706524506s)
	I1024 19:01:59.180015   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:01:59.180022   16652 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
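The sed pipeline that just completed rewrites the CoreDNS Corefile in place, splicing a hosts block ahead of the forward plugin. A quick way to confirm the record landed, sketched here (the jsonpath query is illustrative and not part of the test itself):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
    # Expected fragment, per the sed expression shown in the log:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }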
	I1024 19:01:59.180289   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:01:59.180303   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:01:59.180306   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:01:59.180321   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:01:59.180332   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:01:59.180692   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:01:59.180708   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:01:59.180722   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:00.233180   16652 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1024 19:02:00.233220   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:02:00.236342   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:02:00.236796   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:02:00.236828   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:02:00.237041   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:02:00.237265   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:02:00.237451   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:02:00.237591   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:02:00.610490   16652 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1024 19:02:00.862168   16652 addons.go:231] Setting addon gcp-auth=true in "addons-866342"
	I1024 19:02:00.862234   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:02:00.862665   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:02:00.862710   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:02:00.878295   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39189
	I1024 19:02:00.878699   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:02:00.879272   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:02:00.879288   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:02:00.879574   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:02:00.880186   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:02:00.880234   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:02:00.920393   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33887
	I1024 19:02:00.920944   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:02:00.921492   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:02:00.921518   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:02:00.921804   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:02:00.922017   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:02:00.923718   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:02:00.923930   16652 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1024 19:02:00.923955   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:02:00.927096   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:02:00.927566   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:02:00.927600   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:02:00.927773   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:02:00.927950   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:02:00.928125   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:02:00.928313   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:02:01.061409   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:01.847908   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.313037131s)
	I1024 19:02:01.847960   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.237348403s)
	I1024 19:02:01.847992   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848004   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848014   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.224273755s)
	I1024 19:02:01.848032   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848046   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.847962   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848081   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848115   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.2164986s)
	I1024 19:02:01.848131   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848142   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848230   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.942382392s)
	I1024 19:02:01.848346   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.848285   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.848288   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.848381   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.848389   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.848354   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848436   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848391   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.91529199s)
	I1024 19:02:01.848496   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848499   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.848302   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.848323   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.936227839s)
	I1024 19:02:01.848509   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848520   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848528   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848531   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.753890601s)
	I1024 19:02:01.848255   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.848548   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.848551   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848557   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848561   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848566   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848395   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.848599   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848608   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848607   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.695612886s)
	I1024 19:02:01.848623   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848631   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848759   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.469924232s)
	W1024 19:02:01.848794   16652 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1024 19:02:01.848818   16652 retry.go:31] will retry after 163.134961ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
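This is the usual CRD ordering race: the VolumeSnapshotClass object is applied in the same invocation that creates the snapshot.storage.k8s.io CRDs, so the custom resource has no mapping yet when it is validated. The retry below (19:02:02) simply re-applies everything with --force; an equivalent manual sequence, sketched with the manifest paths named in the log, installs the CRDs first and waits for them to be established before applying the rest:

    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml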
	I1024 19:02:01.848861   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.848874   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.848884   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848889   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.267091083s)
	I1024 19:02:01.848933   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.848942   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848955   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.848894   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848966   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.848975   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848986   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848475   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.849016   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.849027   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.849037   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848956   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.849230   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.848422   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.849284   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.849253   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.849609   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.849619   16652 addons.go:467] Verifying addon metrics-server=true in "addons-866342"
	I1024 19:02:01.849941   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.849971   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.849970   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.849986   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.849990   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.850002   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.850125   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.850146   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.850153   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.850160   16652 addons.go:467] Verifying addon ingress=true in "addons-866342"
	I1024 19:02:01.850174   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.850195   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.850205   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.850213   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.853288   16652 out.go:177] * Verifying ingress addon...
	I1024 19:02:01.850258   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.850278   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.850294   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.850315   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.850641   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.850803   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.851135   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.851276   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.852134   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.852160   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.853351   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.853362   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.855110   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.855143   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.853372   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.853382   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.855173   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.855181   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.855190   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.855192   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.853394   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.853998   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.854033   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.855322   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.855379   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.855403   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.855411   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.855446   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.855452   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.855460   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.855461   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.855487   16652 addons.go:467] Verifying addon registry=true in "addons-866342"
	I1024 19:02:01.857049   16652 out.go:177] * Verifying registry addon...
	I1024 19:02:01.855568   16652 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1024 19:02:01.859162   16652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1024 19:02:01.888788   16652 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1024 19:02:01.888808   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:01.897609   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.897626   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.897857   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.897876   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.897879   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	W1024 19:02:01.897968   16652 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
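The default-storageclass warning above is an optimistic-concurrency conflict: the addon callback read the local-path StorageClass, another writer updated it, and the update carrying the stale resourceVersion was rejected. A patch carries no resourceVersion and side-steps the conflict; a minimal sketch using the default-class annotation and the class names taken from the error message:

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'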
	I1024 19:02:01.902519   16652 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1024 19:02:01.902547   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:01.912796   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:01.913870   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.913892   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.914163   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.914182   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.918195   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:02.013121   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:02:02.437278   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:02.477489   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:02.682036   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.917481006s)
	I1024 19:02:02.682090   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:02.682104   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:02.682042   16652 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.758089174s)
	I1024 19:02:02.683823   16652 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:02:02.682436   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:02.682507   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:02.685730   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:02.685749   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:02.687493   16652 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1024 19:02:02.685767   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:02.689021   16652 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1024 19:02:02.689037   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1024 19:02:02.689303   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:02.689323   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:02.689324   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:02.689343   16652 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-866342"
	I1024 19:02:02.691046   16652 out.go:177] * Verifying csi-hostpath-driver addon...
	I1024 19:02:02.693601   16652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1024 19:02:02.811003   16652 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1024 19:02:02.811025   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1024 19:02:02.862440   16652 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1024 19:02:02.862460   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1024 19:02:02.865285   16652 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1024 19:02:02.865312   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:02.912907   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1024 19:02:02.993525   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:03.030268   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:03.076563   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:03.395797   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:03.425517   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:03.431251   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:03.541523   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:03.917762   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:03.950850   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:04.040300   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:04.417362   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:04.439538   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:04.540895   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:04.749452   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.736272852s)
	I1024 19:02:04.749526   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:04.749542   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:04.749947   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:04.749957   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:04.749974   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:04.749992   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:04.750003   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:04.750290   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:04.750302   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:04.750318   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:04.965977   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:04.966313   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:05.104083   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:05.114436   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.201485104s)
	I1024 19:02:05.114500   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:05.114513   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:05.114807   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:05.114828   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:05.114830   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:05.114838   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:05.114846   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:05.115070   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:05.115083   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:05.116499   16652 addons.go:467] Verifying addon gcp-auth=true in "addons-866342"
	I1024 19:02:05.118426   16652 out.go:177] * Verifying gcp-auth addon...
	I1024 19:02:05.120910   16652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1024 19:02:05.154162   16652 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1024 19:02:05.154186   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:05.209236   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:05.417279   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:05.423518   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:05.537033   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:05.715339   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:05.880314   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:05.918162   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:05.922762   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:06.040050   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:06.215201   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:06.429982   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:06.444250   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:06.547959   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:06.713539   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:06.918324   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:06.922532   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:07.037463   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:07.212801   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:07.417560   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:07.423347   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:07.542324   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:07.712785   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:07.886198   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:07.921150   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:07.927858   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:08.036270   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:08.213193   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:08.418307   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:08.422769   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:08.536747   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:08.714031   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:08.917413   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:08.927188   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:09.063958   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:09.212839   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:09.418065   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:09.424123   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:09.539060   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:09.714877   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:09.917512   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:09.923418   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:10.052322   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:10.231780   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:10.380149   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:10.418386   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:10.422142   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:10.540361   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:10.713514   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:10.918376   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:10.926579   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:11.035912   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:11.213484   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:11.417742   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:11.423073   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:11.535361   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:11.715415   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:11.918839   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:11.925895   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:12.040734   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:12.215521   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:12.380314   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:12.417512   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:12.426004   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:12.537031   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:12.713565   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:12.919402   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:12.927491   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:13.037304   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:13.214135   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:13.418770   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:13.424893   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:13.542099   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:13.713443   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:13.921849   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:13.924209   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:14.040463   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:14.213632   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:14.383489   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:14.417814   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:14.424460   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:14.546417   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:14.715809   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:14.934758   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:14.938061   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:15.050659   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:15.213479   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:15.417436   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:15.458857   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:15.541519   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:15.713587   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:15.917868   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:15.927135   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:16.036944   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:16.214260   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:16.398875   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:16.417983   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:16.423399   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:16.538585   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:16.713329   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:16.917241   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:16.923402   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:17.037644   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:17.213378   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:17.417736   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:17.423799   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:17.536137   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:17.826939   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:17.920657   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:17.926501   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:18.041162   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:18.213941   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:18.418251   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:18.436346   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:18.539615   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:18.713194   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:18.893180   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:18.925770   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:18.926834   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:19.038477   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:19.218119   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:19.427276   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:19.437386   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:19.536629   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:19.713629   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:19.917773   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:19.923590   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:20.041243   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:20.213519   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:20.416740   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:20.423326   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:20.537892   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:20.713900   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:20.917772   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:20.923668   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:21.035534   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:21.213255   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:21.380083   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:21.417039   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:21.422884   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:21.542320   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:21.713736   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:21.918027   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:21.922687   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:22.036748   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:22.213795   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:22.417518   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:22.423216   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:22.543439   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:22.713699   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:22.918174   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:22.924831   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:23.052477   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:23.214315   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:23.390454   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:23.417732   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:23.423322   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:23.537346   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:23.716404   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:23.917611   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:23.922819   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:24.036229   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:24.213269   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:24.418736   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:24.425658   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:24.536225   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:24.713357   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:24.918002   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:24.923899   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:25.036029   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:25.213897   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:25.686652   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:25.698919   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:25.699388   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:25.701768   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:25.713207   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:25.917974   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:25.923112   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:26.036735   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:26.213558   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:26.417647   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:26.423088   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:26.537311   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:26.714160   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:26.918171   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:26.922736   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:27.036857   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:27.212573   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:27.417975   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:27.423202   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:27.536432   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:27.713895   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:27.880647   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:27.918500   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:27.927185   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:28.036938   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:28.217420   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:28.418329   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:28.422580   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:28.535922   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:28.712548   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:28.917697   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:28.923954   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:29.038110   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:29.213698   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:29.696670   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:29.696723   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:29.697700   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:29.714229   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:29.917572   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:29.923331   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:30.036975   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:30.213142   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:30.387201   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:30.418850   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:30.427790   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:30.536084   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:30.715112   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:30.917666   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:30.922831   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:31.041103   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:31.213055   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:31.416693   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:31.422979   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:31.536668   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:31.715288   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:32.046130   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:32.046557   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:32.049023   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:32.214019   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:32.416923   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:32.422050   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:32.537816   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:32.713259   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:32.881129   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:32.920272   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:32.933083   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:33.036300   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:33.212961   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:33.418532   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:33.422756   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:33.535504   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:33.713030   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:33.916843   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:33.923034   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:34.036511   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:34.213662   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:34.418384   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:34.427395   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:34.536934   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:34.713273   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:34.918181   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:34.922593   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:35.035822   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:35.213825   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:35.383938   16652 pod_ready.go:92] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:35.383966   16652 pod_ready.go:81] duration metric: took 40.828432492s waiting for pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.383978   16652 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.389530   16652 pod_ready.go:92] pod "etcd-addons-866342" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:35.389555   16652 pod_ready.go:81] duration metric: took 5.568749ms waiting for pod "etcd-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.389566   16652 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.398208   16652 pod_ready.go:92] pod "kube-apiserver-addons-866342" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:35.398229   16652 pod_ready.go:81] duration metric: took 8.655814ms waiting for pod "kube-apiserver-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.398241   16652 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.405370   16652 pod_ready.go:92] pod "kube-controller-manager-addons-866342" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:35.405388   16652 pod_ready.go:81] duration metric: took 7.139653ms waiting for pod "kube-controller-manager-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.405399   16652 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hz7fb" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.413033   16652 pod_ready.go:92] pod "kube-proxy-hz7fb" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:35.413053   16652 pod_ready.go:81] duration metric: took 7.647033ms waiting for pod "kube-proxy-hz7fb" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.413063   16652 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.420604   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:35.423119   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:35.535965   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:35.714347   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:35.777563   16652 pod_ready.go:92] pod "kube-scheduler-addons-866342" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:35.777584   16652 pod_ready.go:81] duration metric: took 364.515224ms waiting for pod "kube-scheduler-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.777592   16652 pod_ready.go:38] duration metric: took 42.014959556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:02:35.777607   16652 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:02:35.777650   16652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:02:35.795226   16652 api_server.go:72] duration metric: took 42.351056782s to wait for apiserver process to appear ...
	I1024 19:02:35.795248   16652 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:02:35.795268   16652 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I1024 19:02:35.800170   16652 api_server.go:279] https://192.168.39.163:8443/healthz returned 200:
	ok
	I1024 19:02:35.801251   16652 api_server.go:141] control plane version: v1.28.3
	I1024 19:02:35.801269   16652 api_server.go:131] duration metric: took 6.015528ms to wait for apiserver health ...
	I1024 19:02:35.801276   16652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:02:35.920302   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:35.925142   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:35.992362   16652 system_pods.go:59] 18 kube-system pods found
	I1024 19:02:35.992397   16652 system_pods.go:61] "coredns-5dd5756b68-btn4f" [1a65ce1f-1502-4afb-9739-3ff39aa260e7] Running
	I1024 19:02:35.992405   16652 system_pods.go:61] "csi-hostpath-attacher-0" [b79df6c1-4d3c-4ca3-9ad0-d832297c94c9] Running
	I1024 19:02:35.992412   16652 system_pods.go:61] "csi-hostpath-resizer-0" [83c0bd57-8a4c-438a-b200-5b32f8e2c490] Running
	I1024 19:02:35.992423   16652 system_pods.go:61] "csi-hostpathplugin-2x7pp" [413ba041-ddcd-4b11-8908-3fbaaf9f9128] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1024 19:02:35.992434   16652 system_pods.go:61] "etcd-addons-866342" [7b00fcf3-3c2d-4fbf-90d8-f67cc1775321] Running
	I1024 19:02:35.992442   16652 system_pods.go:61] "kube-apiserver-addons-866342" [74168ee9-8de6-40b7-b5f6-f5df5a682a6f] Running
	I1024 19:02:35.992451   16652 system_pods.go:61] "kube-controller-manager-addons-866342" [43cfb66d-8302-46f0-9dcc-4f33a6f205ce] Running
	I1024 19:02:35.992461   16652 system_pods.go:61] "kube-ingress-dns-minikube" [5d55372e-c8e4-4e55-b251-9dad4fad9890] Running
	I1024 19:02:35.992467   16652 system_pods.go:61] "kube-proxy-hz7fb" [cd6d9bae-e261-4141-9430-b0bfaf748547] Running
	I1024 19:02:35.992474   16652 system_pods.go:61] "kube-scheduler-addons-866342" [84855ad7-d7ae-469a-b5cc-d6bff4f4d483] Running
	I1024 19:02:35.992493   16652 system_pods.go:61] "metrics-server-7c66d45ddc-r2sdc" [216942df-99c1-4c92-b8bd-f0594dbb6894] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:02:35.992505   16652 system_pods.go:61] "nvidia-device-plugin-daemonset-kcrfw" [56d67427-465c-406a-a425-3ded489815e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1024 19:02:35.992520   16652 system_pods.go:61] "registry-9fjkv" [16c9f9e1-0151-4045-bb71-6e31267e58df] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1024 19:02:35.992530   16652 system_pods.go:61] "registry-proxy-8jqwg" [bd54e9d3-a6ec-43ec-910e-38ddb0de2574] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1024 19:02:35.992541   16652 system_pods.go:61] "snapshot-controller-58dbcc7b99-5hc9g" [68ab6123-ccb9-4af7-aa9d-dc523a62522a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1024 19:02:35.992554   16652 system_pods.go:61] "snapshot-controller-58dbcc7b99-gdslt" [4ba3a215-6f34-45d8-90ab-e2823003d8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1024 19:02:35.992565   16652 system_pods.go:61] "storage-provisioner" [e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25] Running
	I1024 19:02:35.992577   16652 system_pods.go:61] "tiller-deploy-7b677967b9-mzrhm" [3653bdf1-8b0f-4839-abe0-48a7faadeb74] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1024 19:02:35.992589   16652 system_pods.go:74] duration metric: took 191.306726ms to wait for pod list to return data ...
	I1024 19:02:35.992601   16652 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:02:36.036363   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:36.178035   16652 default_sa.go:45] found service account: "default"
	I1024 19:02:36.178063   16652 default_sa.go:55] duration metric: took 185.451836ms for default service account to be created ...
	I1024 19:02:36.178074   16652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:02:36.214051   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:36.386279   16652 system_pods.go:86] 18 kube-system pods found
	I1024 19:02:36.386303   16652 system_pods.go:89] "coredns-5dd5756b68-btn4f" [1a65ce1f-1502-4afb-9739-3ff39aa260e7] Running
	I1024 19:02:36.386311   16652 system_pods.go:89] "csi-hostpath-attacher-0" [b79df6c1-4d3c-4ca3-9ad0-d832297c94c9] Running
	I1024 19:02:36.386319   16652 system_pods.go:89] "csi-hostpath-resizer-0" [83c0bd57-8a4c-438a-b200-5b32f8e2c490] Running
	I1024 19:02:36.386330   16652 system_pods.go:89] "csi-hostpathplugin-2x7pp" [413ba041-ddcd-4b11-8908-3fbaaf9f9128] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1024 19:02:36.386339   16652 system_pods.go:89] "etcd-addons-866342" [7b00fcf3-3c2d-4fbf-90d8-f67cc1775321] Running
	I1024 19:02:36.386346   16652 system_pods.go:89] "kube-apiserver-addons-866342" [74168ee9-8de6-40b7-b5f6-f5df5a682a6f] Running
	I1024 19:02:36.386354   16652 system_pods.go:89] "kube-controller-manager-addons-866342" [43cfb66d-8302-46f0-9dcc-4f33a6f205ce] Running
	I1024 19:02:36.386365   16652 system_pods.go:89] "kube-ingress-dns-minikube" [5d55372e-c8e4-4e55-b251-9dad4fad9890] Running
	I1024 19:02:36.386380   16652 system_pods.go:89] "kube-proxy-hz7fb" [cd6d9bae-e261-4141-9430-b0bfaf748547] Running
	I1024 19:02:36.386385   16652 system_pods.go:89] "kube-scheduler-addons-866342" [84855ad7-d7ae-469a-b5cc-d6bff4f4d483] Running
	I1024 19:02:36.386391   16652 system_pods.go:89] "metrics-server-7c66d45ddc-r2sdc" [216942df-99c1-4c92-b8bd-f0594dbb6894] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:02:36.386401   16652 system_pods.go:89] "nvidia-device-plugin-daemonset-kcrfw" [56d67427-465c-406a-a425-3ded489815e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1024 19:02:36.386410   16652 system_pods.go:89] "registry-9fjkv" [16c9f9e1-0151-4045-bb71-6e31267e58df] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1024 19:02:36.386423   16652 system_pods.go:89] "registry-proxy-8jqwg" [bd54e9d3-a6ec-43ec-910e-38ddb0de2574] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1024 19:02:36.386430   16652 system_pods.go:89] "snapshot-controller-58dbcc7b99-5hc9g" [68ab6123-ccb9-4af7-aa9d-dc523a62522a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1024 19:02:36.386436   16652 system_pods.go:89] "snapshot-controller-58dbcc7b99-gdslt" [4ba3a215-6f34-45d8-90ab-e2823003d8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1024 19:02:36.386443   16652 system_pods.go:89] "storage-provisioner" [e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25] Running
	I1024 19:02:36.386449   16652 system_pods.go:89] "tiller-deploy-7b677967b9-mzrhm" [3653bdf1-8b0f-4839-abe0-48a7faadeb74] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1024 19:02:36.386457   16652 system_pods.go:126] duration metric: took 208.378217ms to wait for k8s-apps to be running ...
	I1024 19:02:36.386467   16652 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:02:36.386518   16652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:02:36.418215   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:36.421928   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:36.423932   16652 system_svc.go:56] duration metric: took 37.457738ms WaitForService to wait for kubelet.
	I1024 19:02:36.423955   16652 kubeadm.go:581] duration metric: took 42.979791904s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:02:36.423976   16652 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:02:36.536293   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:36.577172   16652 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:02:36.577210   16652 node_conditions.go:123] node cpu capacity is 2
	I1024 19:02:36.577227   16652 node_conditions.go:105] duration metric: took 153.243697ms to run NodePressure ...
	I1024 19:02:36.577240   16652 start.go:228] waiting for startup goroutines ...
	I1024 19:02:36.714402   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:36.917780   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:36.926665   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:37.036889   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:37.212776   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:37.418667   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:37.422752   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:37.537868   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:37.713726   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:37.918271   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:37.924942   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:38.051853   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:38.214497   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:38.417813   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:38.427548   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:38.538166   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:38.713592   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:38.918374   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:38.922554   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:39.037200   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:39.213712   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:39.675961   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:39.688465   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:39.695079   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:39.721448   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:39.918090   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:39.923736   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:40.036092   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:40.213916   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:40.418743   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:40.423471   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:40.546164   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:40.713035   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:40.917987   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:40.923506   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:41.037053   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:41.212886   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:41.418188   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:41.423095   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:41.536576   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:41.713868   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:41.966197   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:41.970098   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:42.037368   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:42.216305   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:42.427089   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:42.434357   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:42.539816   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:42.712698   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:42.921310   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:42.930358   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:43.041686   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:43.217962   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:43.421238   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:43.437048   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:43.540548   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:43.748146   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:43.921235   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:43.929090   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:44.036200   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:44.213309   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:44.418391   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:44.422633   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:44.535391   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:44.713732   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:44.918673   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:44.922846   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:45.040012   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:45.213397   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:45.419257   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:45.424171   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:45.538727   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:45.732256   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:45.922084   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:45.931229   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:46.039706   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:46.215376   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:46.420682   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:46.426421   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:46.537729   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:46.713543   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:46.918078   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:46.923807   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:47.035589   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:47.213955   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:47.420698   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:47.424302   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:47.547876   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:47.714275   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:47.917177   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:47.924169   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:48.036452   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:48.213946   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:48.418736   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:48.424966   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:48.535710   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:48.712784   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:49.200432   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:49.234130   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:49.234583   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:49.241466   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:49.417872   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:49.430379   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:49.537582   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:49.713974   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:49.917835   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:49.923848   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:50.035991   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:50.215873   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:50.422346   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:50.425613   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:50.536013   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:50.713117   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:50.918829   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:50.938113   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:51.061372   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:51.213876   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:51.420764   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:51.423973   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:51.538032   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:51.712957   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:51.918693   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:51.922954   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:52.262170   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:52.277774   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:52.420971   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:52.424809   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:52.536075   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:52.715513   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:52.917786   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:52.923215   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:53.037194   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:53.215066   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:53.418941   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:53.423076   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:53.539432   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:53.713611   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:53.917918   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:53.923926   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:54.035626   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:54.213934   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:54.419267   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:54.422105   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:54.536508   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:54.713503   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:54.919281   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:54.924619   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:55.037805   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:55.213794   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:55.421454   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:55.423781   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:55.542018   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:55.714798   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:55.918002   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:55.922425   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:56.053942   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:56.213864   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:56.419032   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:56.422356   16652 kapi.go:107] duration metric: took 54.563193433s to wait for kubernetes.io/minikube-addons=registry ...
	I1024 19:02:56.536473   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:56.713806   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:56.918175   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:57.040396   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:57.213662   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:57.418034   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:57.539708   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:57.714583   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:57.918341   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:58.036848   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:58.229433   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:58.421924   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:58.536871   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:58.714587   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:58.917702   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:59.053041   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:59.213286   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:59.422874   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:59.538558   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:59.714203   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:59.918291   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:00.039776   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:00.218290   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:00.418075   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:00.535802   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:00.715108   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:00.920184   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:01.046891   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:01.213360   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:01.418725   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:01.537912   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:01.713169   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:01.918253   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:02.037494   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:02.213073   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:02.422576   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:02.537430   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:02.712857   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:02.918457   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:03.043847   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:03.213681   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:03.495516   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:03.548869   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:03.713737   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:03.917682   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:04.036631   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:04.215949   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:04.427444   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:04.537751   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:04.713854   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:04.917459   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:05.036863   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:05.214829   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:05.418276   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:05.536067   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:05.714762   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:05.991126   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:06.040568   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:06.213387   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:06.418294   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:06.536385   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:06.714137   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:06.918268   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:07.035869   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:07.214541   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:07.424186   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:07.536378   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:07.714434   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:07.919043   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:08.038159   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:08.215308   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:08.417821   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:08.537153   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:08.712992   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:08.918106   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:09.037073   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:09.220768   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:09.428442   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:09.554486   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:09.717677   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:09.918819   16652 kapi.go:107] duration metric: took 1m8.063251732s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1024 19:03:10.039349   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:10.213065   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:10.539937   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:10.723542   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:11.036629   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:11.215904   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:11.536474   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:11.714782   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:12.036547   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:12.214129   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:12.665187   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:12.733269   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:13.039766   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:13.223008   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:13.538492   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:13.713970   16652 kapi.go:107] duration metric: took 1m8.593057784s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1024 19:03:13.715591   16652 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-866342 cluster.
	I1024 19:03:13.716892   16652 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1024 19:03:13.718207   16652 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
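	(For reference on the gcp-auth advisory above: a minimal pod manifest that opts out of the credential mount might look like the sketch below. The pod name, image tag, and the label value "true" are illustrative assumptions and are not taken from this run's logs; only the `gcp-auth-skip-secret` label key comes from the advisory itself.)

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds                     # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"         # label key from the advisory; "true" value assumed
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/google-samples/hello-app:1.0   # sample image; tag assumed, image family appears elsewhere in this report

	(For pods created before the addon was enabled, the advisory suggests either recreating them or re-running the enable step with the refresh flag, presumably along the lines of `minikube addons enable gcp-auth --refresh`.)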
	I1024 19:03:14.036295   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:14.547611   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:15.037623   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:15.537008   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:16.050170   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:16.536127   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:17.037217   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:17.536392   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:18.036196   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:18.537571   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:19.040535   16652 kapi.go:107] duration metric: took 1m16.346930681s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1024 19:03:19.042498   16652 out.go:177] * Enabled addons: ingress-dns, metrics-server, inspektor-gadget, nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1024 19:03:19.043944   16652 addons.go:502] enable addons completed in 1m25.886460189s: enabled=[ingress-dns metrics-server inspektor-gadget nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1024 19:03:19.043978   16652 start.go:233] waiting for cluster config update ...
	I1024 19:03:19.043998   16652 start.go:242] writing updated cluster config ...
	I1024 19:03:19.044225   16652 ssh_runner.go:195] Run: rm -f paused
	I1024 19:03:19.092205   16652 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:03:19.094062   16652 out.go:177] * Done! kubectl is now configured to use "addons-866342" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 19:01:06 UTC, ends at Tue 2023-10-24 19:06:14 UTC. --
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.269177111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b92eabcd-b1c2-414d-9cd0-5572a160fe77 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.270428047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e4431164-afd2-4257-b732-4ce11acf6b23 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.271581663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698174374271566126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529561,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=e4431164-afd2-4257-b732-4ce11acf6b23 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.272228684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4fce5aae-87a5-4736-bb5f-60d6add3750c name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.272351478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4fce5aae-87a5-4736-bb5f-60d6add3750c name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.272697956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f29f204f4ebd17719e60b853f55d7168970e2b71096b34033339843c2bd6d8ec,PodSandboxId:aea95cd3bc24f88b4867e4b41ae971f534a33d314e3bcb0f68c0f3e16b60bfca,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1698174366231528385,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-wn6qs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b0a38e0-34fa-4451-b863-0a0bd0a3253a,},Annotations:map[string]string{io.kubernetes.container.hash: 944a9a11,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0926a62c3aee75230a944b590ca86852801441a418c1adb80c7959e3b41409f,PodSandboxId:cb4497b77170499160b4fd8067391a30d11e745a1a149528d43778486a22753c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1698174232507405550,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-28p64,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 4ac99cca-4bd3-4726-92f0-0a693caf1c3d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 1debd86a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d7e9d0b879a70515587070731d1aa9d7db994c75d0fa165f541f819fdf6056,PodSandboxId:40e7a2c74613d3c7ac035278e7c4547e6c6ae2a51ca85cc2732d6cb2a6dbfa06,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,State:CONTAINER_RUNNING,CreatedAt:1698174226000441401,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io
.kubernetes.pod.uid: f5245aa2-39c0-4f7b-917a-28296885d357,},Annotations:map[string]string{io.kubernetes.container.hash: f16ad710,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d5425187cdc8a2d81e08f740c87b01b2fd4a24bcc8a077c6808ca1ae02db13,PodSandboxId:87cf6ad22715050a5364d24e370d7322a5616c05e5205c5dc52db6501826faa8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698174193269762382,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rflxx,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf1a6b62-59ad-4fc2-b33b-94df7e8140c0,},Annotations:map[string]string{io.kubernetes.container.hash: 240fcf71,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f3bc75d05ef65349f00f594187ba1f6968cfb5198f93e8699836b4393ab737,PodSandboxId:46925b2fe4bed50071f15a273ddeeb171847d295d3c8a3b795f4d312c3fc4e04,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},},ImageRef:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,State:CONTAINER_RUNNING,CreatedAt:1698174170253
677951,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-mzrhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3653bdf1-8b0f-4839-abe0-48a7faadeb74,},Annotations:map[string]string{io.kubernetes.container.hash: cec464a1,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef8fe13203e4a11604bdcb89c937b2cb59434e95ec1d8ec8358748d47ab2dec,PodSandboxId:80893d3050676a632f862939fda1b0607bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageR
ef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698174165653183225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b0ea7a05d51aa922fff59f68c1ec691cc605cb123c645772f174e4d26cd7183,PodSandboxId:0e02be4f58dec58f29a5bbebf7dbc00500350df1045b84070bdd0254d9271ea1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee3
85,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174163783482331,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cpn5m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db328688-a88c-432e-88bc-3b2a4d39eded,},Annotations:map[string]string{io.kubernetes.container.hash: 13eabb71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7871dfa83671700f40e07b51783ea2706d230a528f970ad0379c4d1c7c62e9ab,PodSandboxId:08d6d1178815e0041733ec4254a7a96026aa797f7973538d774f099c6634af60,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2931
8c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174139920057708,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp2f5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77b5c409-9bd7-4af3-bb7f-cc9c167c8911,},Annotations:map[string]string{io.kubernetes.container.hash: c877cfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d3a3fb0099c469a3fb79ee36b60c9fac70e8e089c9985932b4c9b8b4f77bf2,PodSandboxId:36a21adc3933a6d86d31ae31065dd35af4936149e2667220261075be6b166170,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghc
r.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,State:CONTAINER_RUNNING,CreatedAt:1698174138016904580,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c4f8q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c7c62856-a84d-4c73-b4b5-ab373ec3b9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4dbf7d50,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a67d397330438efea6123cf6942871d601269335e882400f80253b73792a9,PodSandboxId:80893d3050676a632f862939fda1b06
07bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698174131178422014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3253511e1c28ee63a8c92adf0f3834ad2fe6d4d555ad22a26e09a3565d00ce40,PodSandboxId:a108fe980cae30ea05622d76016d91c5e
33da98fceea93a57e17b89a66880e24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698174126318950701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hz7fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d9bae-e261-4141-9430-b0bfaf748547,},Annotations:map[string]string{io.kubernetes.container.hash: ab861489,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7766ee3fd77eed71755f7d7fdbb364fefc93d40d02b5811343b6396bdec5e5,PodSandboxId:801b7e41ebced3ab192cb807e87eee2f10f052b16deabf1fac95c9532d9fa498,Met
adata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698174118722490945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-btn4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a65ce1f-1502-4afb-9739-3ff39aa260e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d2525,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:936bd9cee2edd8128f6f93e9bf47bb4b7a3a3137bddb0093af55bba76a2a39af,PodSandboxId:9c0595b62da7813fe4b0abf24117bb0680ee92f31821000caa0e39bc0ccbaec2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698174093630625587,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1f72940e24a91b7afecd058f85cf6c,},Annotations:map[string]string{io.kubernetes.container.hash: a80d1a29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0
e879fa8539e14a050e6f7b75fca822cd3f520881caa05765b33d76bc7ca3a,PodSandboxId:e59a8c40c1a60f80b700b1d3b7530d87f51d59d21bdcb04dc91995f9649aa260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698174093365778385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f32b306985ba0b85e27281d251aa310,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aac672c812a42e3f2
1eaf0d8a59b5f36ff8ed5775dfdbd7c64440cabd6777e9,PodSandboxId:c6314b2b790d469dfa8975fa9b0fc6315eba659a672ea559a6543321006d0d62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698174093265470811,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54266d083ecdf4ecb5e305fb10b9988a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{
Id:2684a464c22a82bbd599a984372f9266e27dd5d50e488e7968af530e25b5af13,PodSandboxId:88a89655a2dbc7902c96c1e04f566ee7a877bc09724c265d292ae921d5f2a22b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698174093154250826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc5ece5f95f40c2404b74a679745064,},Annotations:map[string]string{io.kubernetes.container.hash: d952e5d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middlew
are/chain.go:25" id=4fce5aae-87a5-4736-bb5f-60d6add3750c name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.306447382Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=38dd3aa2-63ba-4fba-ac9b-9bf71d86c78e name=/runtime.v1.RuntimeService/Version
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.306551436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=38dd3aa2-63ba-4fba-ac9b-9bf71d86c78e name=/runtime.v1.RuntimeService/Version
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.308061317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=24030061-60af-4837-9bb3-4a00a458b25f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.309372277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698174374309351935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529561,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=24030061-60af-4837-9bb3-4a00a458b25f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.309944362Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f29e5450-5647-4092-a9a3-68c29f226ad8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.309992496Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f29e5450-5647-4092-a9a3-68c29f226ad8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.310404582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f29f204f4ebd17719e60b853f55d7168970e2b71096b34033339843c2bd6d8ec,PodSandboxId:aea95cd3bc24f88b4867e4b41ae971f534a33d314e3bcb0f68c0f3e16b60bfca,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1698174366231528385,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-wn6qs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b0a38e0-34fa-4451-b863-0a0bd0a3253a,},Annotations:map[string]string{io.kubernetes.container.hash: 944a9a11,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0926a62c3aee75230a944b590ca86852801441a418c1adb80c7959e3b41409f,PodSandboxId:cb4497b77170499160b4fd8067391a30d11e745a1a149528d43778486a22753c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1698174232507405550,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-28p64,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 4ac99cca-4bd3-4726-92f0-0a693caf1c3d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 1debd86a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d7e9d0b879a70515587070731d1aa9d7db994c75d0fa165f541f819fdf6056,PodSandboxId:40e7a2c74613d3c7ac035278e7c4547e6c6ae2a51ca85cc2732d6cb2a6dbfa06,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,State:CONTAINER_RUNNING,CreatedAt:1698174226000441401,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io
.kubernetes.pod.uid: f5245aa2-39c0-4f7b-917a-28296885d357,},Annotations:map[string]string{io.kubernetes.container.hash: f16ad710,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d5425187cdc8a2d81e08f740c87b01b2fd4a24bcc8a077c6808ca1ae02db13,PodSandboxId:87cf6ad22715050a5364d24e370d7322a5616c05e5205c5dc52db6501826faa8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698174193269762382,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rflxx,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf1a6b62-59ad-4fc2-b33b-94df7e8140c0,},Annotations:map[string]string{io.kubernetes.container.hash: 240fcf71,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f3bc75d05ef65349f00f594187ba1f6968cfb5198f93e8699836b4393ab737,PodSandboxId:46925b2fe4bed50071f15a273ddeeb171847d295d3c8a3b795f4d312c3fc4e04,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},},ImageRef:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,State:CONTAINER_RUNNING,CreatedAt:1698174170253
677951,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-mzrhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3653bdf1-8b0f-4839-abe0-48a7faadeb74,},Annotations:map[string]string{io.kubernetes.container.hash: cec464a1,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef8fe13203e4a11604bdcb89c937b2cb59434e95ec1d8ec8358748d47ab2dec,PodSandboxId:80893d3050676a632f862939fda1b0607bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageR
ef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698174165653183225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b0ea7a05d51aa922fff59f68c1ec691cc605cb123c645772f174e4d26cd7183,PodSandboxId:0e02be4f58dec58f29a5bbebf7dbc00500350df1045b84070bdd0254d9271ea1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee3
85,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174163783482331,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cpn5m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db328688-a88c-432e-88bc-3b2a4d39eded,},Annotations:map[string]string{io.kubernetes.container.hash: 13eabb71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7871dfa83671700f40e07b51783ea2706d230a528f970ad0379c4d1c7c62e9ab,PodSandboxId:08d6d1178815e0041733ec4254a7a96026aa797f7973538d774f099c6634af60,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2931
8c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174139920057708,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp2f5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77b5c409-9bd7-4af3-bb7f-cc9c167c8911,},Annotations:map[string]string{io.kubernetes.container.hash: c877cfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d3a3fb0099c469a3fb79ee36b60c9fac70e8e089c9985932b4c9b8b4f77bf2,PodSandboxId:36a21adc3933a6d86d31ae31065dd35af4936149e2667220261075be6b166170,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghc
r.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,State:CONTAINER_RUNNING,CreatedAt:1698174138016904580,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c4f8q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c7c62856-a84d-4c73-b4b5-ab373ec3b9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4dbf7d50,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a67d397330438efea6123cf6942871d601269335e882400f80253b73792a9,PodSandboxId:80893d3050676a632f862939fda1b06
07bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698174131178422014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3253511e1c28ee63a8c92adf0f3834ad2fe6d4d555ad22a26e09a3565d00ce40,PodSandboxId:a108fe980cae30ea05622d76016d91c5e
33da98fceea93a57e17b89a66880e24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698174126318950701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hz7fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d9bae-e261-4141-9430-b0bfaf748547,},Annotations:map[string]string{io.kubernetes.container.hash: ab861489,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7766ee3fd77eed71755f7d7fdbb364fefc93d40d02b5811343b6396bdec5e5,PodSandboxId:801b7e41ebced3ab192cb807e87eee2f10f052b16deabf1fac95c9532d9fa498,Met
adata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698174118722490945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-btn4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a65ce1f-1502-4afb-9739-3ff39aa260e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d2525,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:936bd9cee2edd8128f6f93e9bf47bb4b7a3a3137bddb0093af55bba76a2a39af,PodSandboxId:9c0595b62da7813fe4b0abf24117bb0680ee92f31821000caa0e39bc0ccbaec2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698174093630625587,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1f72940e24a91b7afecd058f85cf6c,},Annotations:map[string]string{io.kubernetes.container.hash: a80d1a29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0
e879fa8539e14a050e6f7b75fca822cd3f520881caa05765b33d76bc7ca3a,PodSandboxId:e59a8c40c1a60f80b700b1d3b7530d87f51d59d21bdcb04dc91995f9649aa260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698174093365778385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f32b306985ba0b85e27281d251aa310,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aac672c812a42e3f2
1eaf0d8a59b5f36ff8ed5775dfdbd7c64440cabd6777e9,PodSandboxId:c6314b2b790d469dfa8975fa9b0fc6315eba659a672ea559a6543321006d0d62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698174093265470811,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54266d083ecdf4ecb5e305fb10b9988a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{
Id:2684a464c22a82bbd599a984372f9266e27dd5d50e488e7968af530e25b5af13,PodSandboxId:88a89655a2dbc7902c96c1e04f566ee7a877bc09724c265d292ae921d5f2a22b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698174093154250826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc5ece5f95f40c2404b74a679745064,},Annotations:map[string]string{io.kubernetes.container.hash: d952e5d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middlew
are/chain.go:25" id=f29e5450-5647-4092-a9a3-68c29f226ad8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.347041362Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=8305f6ba-9478-4ab0-9bde-3e23d074dd6a name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.347459291Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:aea95cd3bc24f88b4867e4b41ae971f534a33d314e3bcb0f68c0f3e16b60bfca,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d77478584-wn6qs,Uid:8b0a38e0-34fa-4451-b863-0a0bd0a3253a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698174363681915163,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d77478584-wn6qs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b0a38e0-34fa-4451-b863-0a0bd0a3253a,pod-template-hash: 5d77478584,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T19:06:03.334429874Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cb4497b77170499160b4fd8067391a30d11e745a1a149528d43778486a22753c,Metadata:&PodSandboxMetadata{Name:headlamp-94b766c-28p64,Uid:4ac99cca-4bd3-4726-92f0-0a693caf1c3d,Namespace:he
adlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698174226871823885,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-94b766c-28p64,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 4ac99cca-4bd3-4726-92f0-0a693caf1c3d,pod-template-hash: 94b766c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T19:03:46.540025909Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:40e7a2c74613d3c7ac035278e7c4547e6c6ae2a51ca85cc2732d6cb2a6dbfa06,Metadata:&PodSandboxMetadata{Name:nginx,Uid:f5245aa2-39c0-4f7b-917a-28296885d357,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698174222505466290,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5245aa2-39c0-4f7b-917a-28296885d357,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T19:
03:42.172095034Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:87cf6ad22715050a5364d24e370d7322a5616c05e5205c5dc52db6501826faa8,Metadata:&PodSandboxMetadata{Name:gcp-auth-d4c87556c-rflxx,Uid:bf1a6b62-59ad-4fc2-b33b-94df7e8140c0,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698174189271688076,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-d4c87556c-rflxx,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf1a6b62-59ad-4fc2-b33b-94df7e8140c0,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: d4c87556c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T19:02:05.037393448Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:06ead9376c4df0c511f0a3e1017d5323808175d8109dee01b1ee364b0a785757,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-6f48fc54bd-vvrbc,Uid:85a4a208-cb9e-4c26-8a4f-f939c08527d3,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,Creat
edAt:1698174153872149777,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-6f48fc54bd-vvrbc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85a4a208-cb9e-4c26-8a4f-f939c08527d3,pod-template-hash: 6f48fc54bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T19:02:01.739688659Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:36a21adc3933a6d86d31ae31065dd35af4936149e2667220261075be6b166170,Metadata:&PodSandboxMetadata{Name:gadget-c4f8q,Uid:c7c62856-a84d-4c73-b4b5-ab373ec3b9c9,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698174123055752972,Labels:map[string]string{controller-revision-hash: 5cd5fc5965,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-c4f8q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c7c62856-a84
d-4c73-b4b5-ab373ec3b9c9,k8s-app: gadget,pod-template-generation: 1,},Annotations:map[string]string{container.apparmor.security.beta.kubernetes.io/gadget: unconfined,inspektor-gadget.kinvolk.io/option-hook-mode: auto,kubernetes.io/config.seen: 2023-10-24T19:02:00.900000065Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e02be4f58dec58f29a5bbebf7dbc00500350df1045b84070bdd0254d9271ea1,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-cpn5m,Uid:db328688-a88c-432e-88bc-3b2a4d39eded,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698174122222816073,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: a86f957f-728b-41d6-aff8-a33607483dfe,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: a86f957f-728b-41d6-aff8-a33607483dfe,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admiss
ion-patch-cpn5m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db328688-a88c-432e-88bc-3b2a4d39eded,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T19:02:01.814611417Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08d6d1178815e0041733ec4254a7a96026aa797f7973538d774f099c6634af60,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-zp2f5,Uid:77b5c409-9bd7-4af3-bb7f-cc9c167c8911,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698174122143077601,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 2166509f-1f9d-4bae-80dc-3177bfa908cb,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 2166509f-1f9d-4bae-80dc-3177bfa908cb,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-zp2f5,io.kube
rnetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77b5c409-9bd7-4af3-bb7f-cc9c167c8911,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T19:02:01.807530065Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:80893d3050676a632f862939fda1b0607bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698174120727539380,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode
\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-24T19:02:00.374766593Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:46925b2fe4bed50071f15a273ddeeb171847d295d3c8a3b795f4d312c3fc4e04,Metadata:&PodSandboxMetadata{Name:tiller-deploy-7b677967b9-mzrhm,Uid:3653bdf1-8b0f-4839-abe0-48a7faadeb74,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698174120451679483,Labels:map[string]string{app: helm,io.kubernetes.container.name: POD,io.kubernetes.pod.name: tiller
-deploy-7b677967b9-mzrhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3653bdf1-8b0f-4839-abe0-48a7faadeb74,name: tiller,pod-template-hash: 7b677967b9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T19:01:59.779486895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:115bf7b2f0e0fcaf1dc7a2e56b9ac0025d1cd6ee27c9b3b009f9af3caa475ae2,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:5d55372e-c8e4-4e55-b251-9dad4fad9890,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1698174120065969972,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d55372e-c8e4-4e55-b251-9dad4fad9890,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"
app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2023-10-24T19:01:59.432488304Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:801b7e41ebced3ab192cb807e87eee2f10f052b16deabf1fac95c9532d9fa498,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-btn4f,Uid:1a65ce1f-1502-4afb-9739-3ff39aa260e7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698174115351110802,Labe
ls:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-btn4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a65ce1f-1502-4afb-9739-3ff39aa260e7,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T19:01:54.116103720Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a108fe980cae30ea05622d76016d91c5e33da98fceea93a57e17b89a66880e24,Metadata:&PodSandboxMetadata{Name:kube-proxy-hz7fb,Uid:cd6d9bae-e261-4141-9430-b0bfaf748547,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698174114940343521,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hz7fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d9bae-e261-4141-9430-b0bfaf748547,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T19:01:53.097784723
Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c6314b2b790d469dfa8975fa9b0fc6315eba659a672ea559a6543321006d0d62,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-866342,Uid:54266d083ecdf4ecb5e305fb10b9988a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698174092628760918,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54266d083ecdf4ecb5e305fb10b9988a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 54266d083ecdf4ecb5e305fb10b9988a,kubernetes.io/config.seen: 2023-10-24T19:01:32.062487251Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e59a8c40c1a60f80b700b1d3b7530d87f51d59d21bdcb04dc91995f9649aa260,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-866342,Uid:0f32b306985ba0b85e27281d251aa310,Namespace:kube-system,Attempt:0,},State
:SANDBOX_READY,CreatedAt:1698174092613695979,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f32b306985ba0b85e27281d251aa310,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0f32b306985ba0b85e27281d251aa310,kubernetes.io/config.seen: 2023-10-24T19:01:32.062488252Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:88a89655a2dbc7902c96c1e04f566ee7a877bc09724c265d292ae921d5f2a22b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-866342,Uid:6bc5ece5f95f40c2404b74a679745064,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698174092580969952,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc5ece5f95f40c2404b74a679745064,tier: control-plane,}
,Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.163:8443,kubernetes.io/config.hash: 6bc5ece5f95f40c2404b74a679745064,kubernetes.io/config.seen: 2023-10-24T19:01:32.062486174Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9c0595b62da7813fe4b0abf24117bb0680ee92f31821000caa0e39bc0ccbaec2,Metadata:&PodSandboxMetadata{Name:etcd-addons-866342,Uid:bc1f72940e24a91b7afecd058f85cf6c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698174092566581449,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1f72940e24a91b7afecd058f85cf6c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.163:2379,kubernetes.io/config.hash: bc1f72940e24a91b7afecd058f85cf6c,kubernetes.io/config.seen: 2023-10-24T19:01:32.062483046Z,kubernetes.io/confi
g.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=8305f6ba-9478-4ab0-9bde-3e23d074dd6a name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.348371829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=95eda7ab-f4fe-4ec5-843f-417ce4b2d847 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.348472522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=95eda7ab-f4fe-4ec5-843f-417ce4b2d847 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.348981389Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f29f204f4ebd17719e60b853f55d7168970e2b71096b34033339843c2bd6d8ec,PodSandboxId:aea95cd3bc24f88b4867e4b41ae971f534a33d314e3bcb0f68c0f3e16b60bfca,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1698174366231528385,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-wn6qs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b0a38e0-34fa-4451-b863-0a0bd0a3253a,},Annotations:map[string]string{io.kubernetes.container.hash: 944a9a11,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0926a62c3aee75230a944b590ca86852801441a418c1adb80c7959e3b41409f,PodSandboxId:cb4497b77170499160b4fd8067391a30d11e745a1a149528d43778486a22753c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1698174232507405550,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-28p64,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 4ac99cca-4bd3-4726-92f0-0a693caf1c3d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 1debd86a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d7e9d0b879a70515587070731d1aa9d7db994c75d0fa165f541f819fdf6056,PodSandboxId:40e7a2c74613d3c7ac035278e7c4547e6c6ae2a51ca85cc2732d6cb2a6dbfa06,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,State:CONTAINER_RUNNING,CreatedAt:1698174226000441401,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io
.kubernetes.pod.uid: f5245aa2-39c0-4f7b-917a-28296885d357,},Annotations:map[string]string{io.kubernetes.container.hash: f16ad710,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d5425187cdc8a2d81e08f740c87b01b2fd4a24bcc8a077c6808ca1ae02db13,PodSandboxId:87cf6ad22715050a5364d24e370d7322a5616c05e5205c5dc52db6501826faa8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698174193269762382,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rflxx,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf1a6b62-59ad-4fc2-b33b-94df7e8140c0,},Annotations:map[string]string{io.kubernetes.container.hash: 240fcf71,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f3bc75d05ef65349f00f594187ba1f6968cfb5198f93e8699836b4393ab737,PodSandboxId:46925b2fe4bed50071f15a273ddeeb171847d295d3c8a3b795f4d312c3fc4e04,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},},ImageRef:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,State:CONTAINER_RUNNING,CreatedAt:1698174170253
677951,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-mzrhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3653bdf1-8b0f-4839-abe0-48a7faadeb74,},Annotations:map[string]string{io.kubernetes.container.hash: cec464a1,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef8fe13203e4a11604bdcb89c937b2cb59434e95ec1d8ec8358748d47ab2dec,PodSandboxId:80893d3050676a632f862939fda1b0607bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageR
ef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698174165653183225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b0ea7a05d51aa922fff59f68c1ec691cc605cb123c645772f174e4d26cd7183,PodSandboxId:0e02be4f58dec58f29a5bbebf7dbc00500350df1045b84070bdd0254d9271ea1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee3
85,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174163783482331,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cpn5m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db328688-a88c-432e-88bc-3b2a4d39eded,},Annotations:map[string]string{io.kubernetes.container.hash: 13eabb71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7871dfa83671700f40e07b51783ea2706d230a528f970ad0379c4d1c7c62e9ab,PodSandboxId:08d6d1178815e0041733ec4254a7a96026aa797f7973538d774f099c6634af60,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2931
8c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174139920057708,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp2f5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77b5c409-9bd7-4af3-bb7f-cc9c167c8911,},Annotations:map[string]string{io.kubernetes.container.hash: c877cfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d3a3fb0099c469a3fb79ee36b60c9fac70e8e089c9985932b4c9b8b4f77bf2,PodSandboxId:36a21adc3933a6d86d31ae31065dd35af4936149e2667220261075be6b166170,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghc
r.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,State:CONTAINER_RUNNING,CreatedAt:1698174138016904580,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c4f8q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c7c62856-a84d-4c73-b4b5-ab373ec3b9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4dbf7d50,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a67d397330438efea6123cf6942871d601269335e882400f80253b73792a9,PodSandboxId:80893d3050676a632f862939fda1b06
07bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698174131178422014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3253511e1c28ee63a8c92adf0f3834ad2fe6d4d555ad22a26e09a3565d00ce40,PodSandboxId:a108fe980cae30ea05622d76016d91c5e
33da98fceea93a57e17b89a66880e24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698174126318950701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hz7fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d9bae-e261-4141-9430-b0bfaf748547,},Annotations:map[string]string{io.kubernetes.container.hash: ab861489,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7766ee3fd77eed71755f7d7fdbb364fefc93d40d02b5811343b6396bdec5e5,PodSandboxId:801b7e41ebced3ab192cb807e87eee2f10f052b16deabf1fac95c9532d9fa498,Met
adata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698174118722490945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-btn4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a65ce1f-1502-4afb-9739-3ff39aa260e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d2525,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:936bd9cee2edd8128f6f93e9bf47bb4b7a3a3137bddb0093af55bba76a2a39af,PodSandboxId:9c0595b62da7813fe4b0abf24117bb0680ee92f31821000caa0e39bc0ccbaec2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698174093630625587,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1f72940e24a91b7afecd058f85cf6c,},Annotations:map[string]string{io.kubernetes.container.hash: a80d1a29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0
e879fa8539e14a050e6f7b75fca822cd3f520881caa05765b33d76bc7ca3a,PodSandboxId:e59a8c40c1a60f80b700b1d3b7530d87f51d59d21bdcb04dc91995f9649aa260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698174093365778385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f32b306985ba0b85e27281d251aa310,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aac672c812a42e3f2
1eaf0d8a59b5f36ff8ed5775dfdbd7c64440cabd6777e9,PodSandboxId:c6314b2b790d469dfa8975fa9b0fc6315eba659a672ea559a6543321006d0d62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698174093265470811,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54266d083ecdf4ecb5e305fb10b9988a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{
Id:2684a464c22a82bbd599a984372f9266e27dd5d50e488e7968af530e25b5af13,PodSandboxId:88a89655a2dbc7902c96c1e04f566ee7a877bc09724c265d292ae921d5f2a22b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698174093154250826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc5ece5f95f40c2404b74a679745064,},Annotations:map[string]string{io.kubernetes.container.hash: d952e5d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middlew
are/chain.go:25" id=95eda7ab-f4fe-4ec5-843f-417ce4b2d847 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.357530612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7280925b-992e-4566-8046-759ad18480cd name=/runtime.v1.RuntimeService/Version
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.357615361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7280925b-992e-4566-8046-759ad18480cd name=/runtime.v1.RuntimeService/Version
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.364155915Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4c7568a8-5c82-4673-8c04-b31f581d3357 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.365548436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698174374365533559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529561,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=4c7568a8-5c82-4673-8c04-b31f581d3357 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.366012233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=390bb9f8-da87-4102-9b27-7ebf0f7d8f90 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.366053732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=390bb9f8-da87-4102-9b27-7ebf0f7d8f90 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:06:14 addons-866342 crio[717]: time="2023-10-24 19:06:14.366487320Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f29f204f4ebd17719e60b853f55d7168970e2b71096b34033339843c2bd6d8ec,PodSandboxId:aea95cd3bc24f88b4867e4b41ae971f534a33d314e3bcb0f68c0f3e16b60bfca,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1698174366231528385,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-wn6qs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b0a38e0-34fa-4451-b863-0a0bd0a3253a,},Annotations:map[string]string{io.kubernetes.container.hash: 944a9a11,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0926a62c3aee75230a944b590ca86852801441a418c1adb80c7959e3b41409f,PodSandboxId:cb4497b77170499160b4fd8067391a30d11e745a1a149528d43778486a22753c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1698174232507405550,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-28p64,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 4ac99cca-4bd3-4726-92f0-0a693caf1c3d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 1debd86a,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d7e9d0b879a70515587070731d1aa9d7db994c75d0fa165f541f819fdf6056,PodSandboxId:40e7a2c74613d3c7ac035278e7c4547e6c6ae2a51ca85cc2732d6cb2a6dbfa06,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,State:CONTAINER_RUNNING,CreatedAt:1698174226000441401,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io
.kubernetes.pod.uid: f5245aa2-39c0-4f7b-917a-28296885d357,},Annotations:map[string]string{io.kubernetes.container.hash: f16ad710,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d5425187cdc8a2d81e08f740c87b01b2fd4a24bcc8a077c6808ca1ae02db13,PodSandboxId:87cf6ad22715050a5364d24e370d7322a5616c05e5205c5dc52db6501826faa8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698174193269762382,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rflxx,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf1a6b62-59ad-4fc2-b33b-94df7e8140c0,},Annotations:map[string]string{io.kubernetes.container.hash: 240fcf71,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f3bc75d05ef65349f00f594187ba1f6968cfb5198f93e8699836b4393ab737,PodSandboxId:46925b2fe4bed50071f15a273ddeeb171847d295d3c8a3b795f4d312c3fc4e04,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},},ImageRef:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,State:CONTAINER_RUNNING,CreatedAt:1698174170253
677951,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-mzrhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3653bdf1-8b0f-4839-abe0-48a7faadeb74,},Annotations:map[string]string{io.kubernetes.container.hash: cec464a1,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef8fe13203e4a11604bdcb89c937b2cb59434e95ec1d8ec8358748d47ab2dec,PodSandboxId:80893d3050676a632f862939fda1b0607bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageR
ef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698174165653183225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b0ea7a05d51aa922fff59f68c1ec691cc605cb123c645772f174e4d26cd7183,PodSandboxId:0e02be4f58dec58f29a5bbebf7dbc00500350df1045b84070bdd0254d9271ea1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee3
85,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174163783482331,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cpn5m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db328688-a88c-432e-88bc-3b2a4d39eded,},Annotations:map[string]string{io.kubernetes.container.hash: 13eabb71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7871dfa83671700f40e07b51783ea2706d230a528f970ad0379c4d1c7c62e9ab,PodSandboxId:08d6d1178815e0041733ec4254a7a96026aa797f7973538d774f099c6634af60,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2931
8c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174139920057708,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp2f5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77b5c409-9bd7-4af3-bb7f-cc9c167c8911,},Annotations:map[string]string{io.kubernetes.container.hash: c877cfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d3a3fb0099c469a3fb79ee36b60c9fac70e8e089c9985932b4c9b8b4f77bf2,PodSandboxId:36a21adc3933a6d86d31ae31065dd35af4936149e2667220261075be6b166170,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghc
r.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,State:CONTAINER_RUNNING,CreatedAt:1698174138016904580,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c4f8q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c7c62856-a84d-4c73-b4b5-ab373ec3b9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4dbf7d50,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a67d397330438efea6123cf6942871d601269335e882400f80253b73792a9,PodSandboxId:80893d3050676a632f862939fda1b06
07bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698174131178422014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3253511e1c28ee63a8c92adf0f3834ad2fe6d4d555ad22a26e09a3565d00ce40,PodSandboxId:a108fe980cae30ea05622d76016d91c5e
33da98fceea93a57e17b89a66880e24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698174126318950701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hz7fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d9bae-e261-4141-9430-b0bfaf748547,},Annotations:map[string]string{io.kubernetes.container.hash: ab861489,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7766ee3fd77eed71755f7d7fdbb364fefc93d40d02b5811343b6396bdec5e5,PodSandboxId:801b7e41ebced3ab192cb807e87eee2f10f052b16deabf1fac95c9532d9fa498,Met
adata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698174118722490945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-btn4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a65ce1f-1502-4afb-9739-3ff39aa260e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d2525,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:936bd9cee2edd8128f6f93e9bf47bb4b7a3a3137bddb0093af55bba76a2a39af,PodSandboxId:9c0595b62da7813fe4b0abf24117bb0680ee92f31821000caa0e39bc0ccbaec2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698174093630625587,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1f72940e24a91b7afecd058f85cf6c,},Annotations:map[string]string{io.kubernetes.container.hash: a80d1a29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0
e879fa8539e14a050e6f7b75fca822cd3f520881caa05765b33d76bc7ca3a,PodSandboxId:e59a8c40c1a60f80b700b1d3b7530d87f51d59d21bdcb04dc91995f9649aa260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698174093365778385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f32b306985ba0b85e27281d251aa310,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aac672c812a42e3f2
1eaf0d8a59b5f36ff8ed5775dfdbd7c64440cabd6777e9,PodSandboxId:c6314b2b790d469dfa8975fa9b0fc6315eba659a672ea559a6543321006d0d62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698174093265470811,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54266d083ecdf4ecb5e305fb10b9988a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{
Id:2684a464c22a82bbd599a984372f9266e27dd5d50e488e7968af530e25b5af13,PodSandboxId:88a89655a2dbc7902c96c1e04f566ee7a877bc09724c265d292ae921d5f2a22b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698174093154250826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc5ece5f95f40c2404b74a679745064,},Annotations:map[string]string{io.kubernetes.container.hash: d952e5d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middlew
are/chain.go:25" id=390bb9f8-da87-4102-9b27-7ebf0f7d8f90 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f29f204f4ebd1       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6                      8 seconds ago       Running             hello-world-app           0                   aea95cd3bc24f       hello-world-app-5d77478584-wn6qs
	c0926a62c3aee       ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4                        2 minutes ago       Running             headlamp                  0                   cb4497b771704       headlamp-94b766c-28p64
	38d7e9d0b879a       docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf                              2 minutes ago       Running             nginx                     0                   40e7a2c74613d       nginx
	06d5425187cdc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   87cf6ad227150       gcp-auth-d4c87556c-rflxx
	63f3bc75d05ef       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                  3 minutes ago       Running             tiller                    0                   46925b2fe4bed       tiller-deploy-7b677967b9-mzrhm
	aef8fe13203e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       1                   80893d3050676       storage-provisioner
	8b0ea7a05d51a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   0e02be4f58dec       ingress-nginx-admission-patch-cpn5m
	7871dfa836717       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   08d6d1178815e       ingress-nginx-admission-create-zp2f5
	24d3a3fb0099c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21            3 minutes ago       Running             gadget                    0                   36a21adc3933a       gadget-c4f8q
	d53a67d397330       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Exited              storage-provisioner       0                   80893d3050676       storage-provisioner
	3253511e1c28e       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                             4 minutes ago       Running             kube-proxy                0                   a108fe980cae3       kube-proxy-hz7fb
	4d7766ee3fd77       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   801b7e41ebced       coredns-5dd5756b68-btn4f
	936bd9cee2edd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   9c0595b62da78       etcd-addons-866342
	fe0e879fa8539       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                             4 minutes ago       Running             kube-scheduler            0                   e59a8c40c1a60       kube-scheduler-addons-866342
	8aac672c812a4       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                             4 minutes ago       Running             kube-controller-manager   0                   c6314b2b790d4       kube-controller-manager-addons-866342
	2684a464c22a8       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                             4 minutes ago       Running             kube-apiserver            0                   88a89655a2dbc       kube-apiserver-addons-866342
	
	* 
	* ==> coredns [4d7766ee3fd77eed71755f7d7fdbb364fefc93d40d02b5811343b6396bdec5e5] <==
	* [INFO] 10.244.0.6:45339 - 51654 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000156054s
	[INFO] 10.244.0.6:40781 - 28545 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076049s
	[INFO] 10.244.0.6:40781 - 38141 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.001674223s
	[INFO] 10.244.0.6:34074 - 33831 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060764s
	[INFO] 10.244.0.6:34074 - 65238 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062505s
	[INFO] 10.244.0.6:37006 - 11354 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076454s
	[INFO] 10.244.0.6:37006 - 52568 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072965s
	[INFO] 10.244.0.6:36559 - 54795 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095109s
	[INFO] 10.244.0.6:36559 - 53511 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000064423s
	[INFO] 10.244.0.6:50015 - 35787 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000196331s
	[INFO] 10.244.0.6:50015 - 8905 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000169411s
	[INFO] 10.244.0.6:47951 - 11934 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045869s
	[INFO] 10.244.0.6:47951 - 9884 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098304s
	[INFO] 10.244.0.6:60450 - 38153 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000074006s
	[INFO] 10.244.0.6:60450 - 21271 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000063887s
	[INFO] 10.244.0.20:52467 - 34449 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000460102s
	[INFO] 10.244.0.20:58297 - 179 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000159392s
	[INFO] 10.244.0.20:39044 - 53863 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138179s
	[INFO] 10.244.0.20:55215 - 12268 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108806s
	[INFO] 10.244.0.20:42378 - 53725 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000303568s
	[INFO] 10.244.0.20:52652 - 24410 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000746993s
	[INFO] 10.244.0.20:37984 - 37086 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00060007s
	[INFO] 10.244.0.20:50796 - 23696 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000731589s
	[INFO] 10.244.0.23:38343 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000190314s
	[INFO] 10.244.0.23:44423 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094011s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-866342
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-866342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=addons-866342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_01_41_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-866342
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:01:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-866342
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:06:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:04:14 +0000   Tue, 24 Oct 2023 19:01:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:04:14 +0000   Tue, 24 Oct 2023 19:01:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:04:14 +0000   Tue, 24 Oct 2023 19:01:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:04:14 +0000   Tue, 24 Oct 2023 19:01:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    addons-866342
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 2799a64041ca4d8881b5d53fbd221f45
	  System UUID:                2799a640-41ca-4d88-81b5-d53fbd221f45
	  Boot ID:                    e72a99ec-72e9-4002-ab8a-b128d71c8bda
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-wn6qs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gadget                      gadget-c4f8q                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  gcp-auth                    gcp-auth-d4c87556c-rflxx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  headlamp                    headlamp-94b766c-28p64                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 coredns-5dd5756b68-btn4f                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m21s
	  kube-system                 etcd-addons-866342                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m34s
	  kube-system                 kube-apiserver-addons-866342             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-controller-manager-addons-866342    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-proxy-hz7fb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-scheduler-addons-866342             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 tiller-deploy-7b677967b9-mzrhm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m42s (x8 over 4m42s)  kubelet          Node addons-866342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s (x8 over 4m42s)  kubelet          Node addons-866342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s (x7 over 4m42s)  kubelet          Node addons-866342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m33s                  kubelet          Node addons-866342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m33s                  kubelet          Node addons-866342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m33s                  kubelet          Node addons-866342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m33s                  kubelet          Node addons-866342 status is now: NodeReady
	  Normal  RegisteredNode           4m22s                  node-controller  Node addons-866342 event: Registered Node addons-866342 in Controller
	
	* 
	* ==> dmesg <==
	* [  +3.463431] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150613] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.058990] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.975190] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.102388] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.134915] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.114699] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.214536] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[  +9.225259] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +9.254807] systemd-fstab-generator[1245]: Ignoring "noauto" for root device
	[ +19.696121] kauditd_printk_skb: 10 callbacks suppressed
	[Oct24 19:02] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.017627] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.514980] kauditd_printk_skb: 4 callbacks suppressed
	[ +15.045801] kauditd_printk_skb: 18 callbacks suppressed
	[Oct24 19:03] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.005024] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.005027] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.537255] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.645094] kauditd_printk_skb: 12 callbacks suppressed
	[Oct24 19:04] kauditd_printk_skb: 12 callbacks suppressed
	[Oct24 19:06] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [936bd9cee2edd8128f6f93e9bf47bb4b7a3a3137bddb0093af55bba76a2a39af] <==
	* {"level":"warn","ts":"2023-10-24T19:02:49.186268Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-24T19:02:48.873586Z","time spent":"312.627543ms","remote":"127.0.0.1:49192","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":826,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-certs-patch-cjr4z.17912068c59cea20\" mod_revision:0 > success:<request_put:<key:\"/registry/events/gcp-auth/gcp-auth-certs-patch-cjr4z.17912068c59cea20\" value_size:739 lease:5689325486867454225 >> failure:<>"}
	{"level":"warn","ts":"2023-10-24T19:02:49.187407Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.51914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13864"}
	{"level":"info","ts":"2023-10-24T19:02:49.187502Z","caller":"traceutil/trace.go:171","msg":"trace[1039726344] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:999; }","duration":"276.74587ms","start":"2023-10-24T19:02:48.910746Z","end":"2023-10-24T19:02:49.187492Z","steps":["trace[1039726344] 'agreement among raft nodes before linearized reading'  (duration: 275.811698ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:02:49.199009Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.96165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82237"}
	{"level":"info","ts":"2023-10-24T19:02:49.199098Z","caller":"traceutil/trace.go:171","msg":"trace[1078243644] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1000; }","duration":"171.058753ms","start":"2023-10-24T19:02:49.02803Z","end":"2023-10-24T19:02:49.199089Z","steps":["trace[1078243644] 'agreement among raft nodes before linearized reading'  (duration: 165.698624ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:02:49.20771Z","caller":"traceutil/trace.go:171","msg":"trace[963569469] transaction","detail":"{read_only:false; response_revision:1000; number_of_response:1; }","duration":"256.477498ms","start":"2023-10-24T19:02:48.951217Z","end":"2023-10-24T19:02:49.207694Z","steps":["trace[963569469] 'process raft request'  (duration: 236.86989ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:02:49.209715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.57384ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82237"}
	{"level":"info","ts":"2023-10-24T19:02:49.209774Z","caller":"traceutil/trace.go:171","msg":"trace[1723567840] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1000; }","duration":"258.631845ms","start":"2023-10-24T19:02:48.951129Z","end":"2023-10-24T19:02:49.209761Z","steps":["trace[1723567840] 'agreement among raft nodes before linearized reading'  (duration: 238.132414ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:02:49.209246Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.645506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T19:02:49.209983Z","caller":"traceutil/trace.go:171","msg":"trace[1453851684] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1000; }","duration":"248.393426ms","start":"2023-10-24T19:02:48.961582Z","end":"2023-10-24T19:02:49.209976Z","steps":["trace[1453851684] 'agreement among raft nodes before linearized reading'  (duration: 247.592324ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:02:52.255475Z","caller":"traceutil/trace.go:171","msg":"trace[1383206217] linearizableReadLoop","detail":"{readStateIndex:1053; appliedIndex:1052; }","duration":"226.249042ms","start":"2023-10-24T19:02:52.029212Z","end":"2023-10-24T19:02:52.255461Z","steps":["trace[1383206217] 'read index received'  (duration: 226.027303ms)","trace[1383206217] 'applied index is now lower than readState.Index'  (duration: 221.248µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:02:52.255645Z","caller":"traceutil/trace.go:171","msg":"trace[1250206044] transaction","detail":"{read_only:false; response_revision:1022; number_of_response:1; }","duration":"258.403206ms","start":"2023-10-24T19:02:51.997228Z","end":"2023-10-24T19:02:52.255631Z","steps":["trace[1250206044] 'process raft request'  (duration: 258.111178ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:02:52.255817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.705404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-10-24T19:02:52.257918Z","caller":"traceutil/trace.go:171","msg":"trace[2028574549] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1022; }","duration":"193.81294ms","start":"2023-10-24T19:02:52.064093Z","end":"2023-10-24T19:02:52.257906Z","steps":["trace[2028574549] 'agreement among raft nodes before linearized reading'  (duration: 191.679534ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:02:52.256187Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.986748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82322"}
	{"level":"info","ts":"2023-10-24T19:02:52.258081Z","caller":"traceutil/trace.go:171","msg":"trace[338915624] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1022; }","duration":"228.890913ms","start":"2023-10-24T19:02:52.029183Z","end":"2023-10-24T19:02:52.258074Z","steps":["trace[338915624] 'agreement among raft nodes before linearized reading'  (duration: 226.825858ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:03:05.983702Z","caller":"traceutil/trace.go:171","msg":"trace[1268068637] transaction","detail":"{read_only:false; response_revision:1075; number_of_response:1; }","duration":"106.772805ms","start":"2023-10-24T19:03:05.876916Z","end":"2023-10-24T19:03:05.983689Z","steps":["trace[1268068637] 'process raft request'  (duration: 106.349737ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:03:12.652441Z","caller":"traceutil/trace.go:171","msg":"trace[400807164] linearizableReadLoop","detail":"{readStateIndex:1145; appliedIndex:1144; }","duration":"141.056459ms","start":"2023-10-24T19:03:12.511372Z","end":"2023-10-24T19:03:12.652428Z","steps":["trace[400807164] 'read index received'  (duration: 140.781066ms)","trace[400807164] 'applied index is now lower than readState.Index'  (duration: 274.892µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T19:03:12.652683Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.404305ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-10-24T19:03:12.652938Z","caller":"traceutil/trace.go:171","msg":"trace[381011134] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1110; }","duration":"141.669387ms","start":"2023-10-24T19:03:12.511257Z","end":"2023-10-24T19:03:12.652926Z","steps":["trace[381011134] 'agreement among raft nodes before linearized reading'  (duration: 141.361002ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:03:12.653253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.44824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82490"}
	{"level":"info","ts":"2023-10-24T19:03:12.652774Z","caller":"traceutil/trace.go:171","msg":"trace[663327054] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"379.329217ms","start":"2023-10-24T19:03:12.273383Z","end":"2023-10-24T19:03:12.652713Z","steps":["trace[663327054] 'process raft request'  (duration: 378.812226ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:03:12.653548Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-24T19:03:12.273368Z","time spent":"380.062475ms","remote":"127.0.0.1:49214","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7530,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/addons-866342\" mod_revision:934 > success:<request_put:<key:\"/registry/minions/addons-866342\" value_size:7491 >> failure:<request_range:<key:\"/registry/minions/addons-866342\" > >"}
	{"level":"info","ts":"2023-10-24T19:03:12.653395Z","caller":"traceutil/trace.go:171","msg":"trace[471025193] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1110; }","duration":"124.651819ms","start":"2023-10-24T19:03:12.528734Z","end":"2023-10-24T19:03:12.653386Z","steps":["trace[471025193] 'agreement among raft nodes before linearized reading'  (duration: 124.346617ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:03:51.172529Z","caller":"traceutil/trace.go:171","msg":"trace[376606193] transaction","detail":"{read_only:false; response_revision:1414; number_of_response:1; }","duration":"104.330538ms","start":"2023-10-24T19:03:51.068169Z","end":"2023-10-24T19:03:51.1725Z","steps":["trace[376606193] 'process raft request'  (duration: 104.247725ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [06d5425187cdc8a2d81e08f740c87b01b2fd4a24bcc8a077c6808ca1ae02db13] <==
	* 2023/10/24 19:03:13 GCP Auth Webhook started!
	2023/10/24 19:03:19 Ready to marshal response ...
	2023/10/24 19:03:19 Ready to write response ...
	2023/10/24 19:03:19 Ready to marshal response ...
	2023/10/24 19:03:19 Ready to write response ...
	2023/10/24 19:03:29 Ready to marshal response ...
	2023/10/24 19:03:29 Ready to write response ...
	2023/10/24 19:03:29 Ready to marshal response ...
	2023/10/24 19:03:29 Ready to write response ...
	2023/10/24 19:03:29 Ready to marshal response ...
	2023/10/24 19:03:29 Ready to write response ...
	2023/10/24 19:03:42 Ready to marshal response ...
	2023/10/24 19:03:42 Ready to write response ...
	2023/10/24 19:03:46 Ready to marshal response ...
	2023/10/24 19:03:46 Ready to write response ...
	2023/10/24 19:03:46 Ready to marshal response ...
	2023/10/24 19:03:46 Ready to write response ...
	2023/10/24 19:03:46 Ready to marshal response ...
	2023/10/24 19:03:46 Ready to write response ...
	2023/10/24 19:03:52 Ready to marshal response ...
	2023/10/24 19:03:52 Ready to write response ...
	2023/10/24 19:04:28 Ready to marshal response ...
	2023/10/24 19:04:28 Ready to write response ...
	2023/10/24 19:06:03 Ready to marshal response ...
	2023/10/24 19:06:03 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:06:14 up 5 min,  0 users,  load average: 1.29, 1.86, 0.96
	Linux addons-866342 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2684a464c22a82bbd599a984372f9266e27dd5d50e488e7968af530e25b5af13] <==
	* I1024 19:03:44.686122       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1024 19:03:45.723941       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1024 19:03:46.448136       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.165.34"}
	I1024 19:04:06.299034       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1024 19:04:45.619022       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:45.619180       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:45.634895       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:45.635000       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:45.649060       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:45.649134       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:45.660844       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:45.660929       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:45.671253       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:45.671418       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:45.688456       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:45.688532       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:45.688577       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:45.688613       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:45.704149       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:45.706406       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1024 19:04:46.672020       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1024 19:04:46.692065       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1024 19:04:46.724212       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1024 19:06:03.540013       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.48.85"}
	E1024 19:06:06.478075       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [8aac672c812a42e3f21eaf0d8a59b5f36ff8ed5775dfdbd7c64440cabd6777e9] <==
	* W1024 19:05:08.038388       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:05:08.038501       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:05:22.521740       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:05:22.521798       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:05:22.993942       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:05:22.993973       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:05:25.726216       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:05:25.726422       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:05:51.772477       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:05:51.772557       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:05:59.057416       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:05:59.058015       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1024 19:06:03.278866       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1024 19:06:03.320776       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-wn6qs"
	I1024 19:06:03.334516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="56.708879ms"
	I1024 19:06:03.351060       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.470539ms"
	I1024 19:06:03.369589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="18.461043ms"
	I1024 19:06:03.369728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="52.462µs"
	I1024 19:06:06.345184       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1024 19:06:06.350949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="8.65µs"
	I1024 19:06:06.359962       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1024 19:06:07.346063       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.335055ms"
	I1024 19:06:07.346577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="119.155µs"
	W1024 19:06:12.239575       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:06:12.239637       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [3253511e1c28ee63a8c92adf0f3834ad2fe6d4d555ad22a26e09a3565d00ce40] <==
	* I1024 19:02:08.771158       1 server_others.go:69] "Using iptables proxy"
	I1024 19:02:08.940201       1 node.go:141] Successfully retrieved node IP: 192.168.39.163
	I1024 19:02:09.671563       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 19:02:09.671615       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 19:02:09.854441       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:02:09.854548       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:02:09.854718       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:02:09.854728       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:02:09.882402       1 config.go:188] "Starting service config controller"
	I1024 19:02:09.884531       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:02:09.884574       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:02:09.884591       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:02:09.894621       1 config.go:315] "Starting node config controller"
	I1024 19:02:09.894809       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:02:10.204722       1 shared_informer.go:318] Caches are synced for node config
	I1024 19:02:10.272414       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 19:02:10.277064       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [fe0e879fa8539e14a050e6f7b75fca822cd3f520881caa05765b33d76bc7ca3a] <==
	* E1024 19:01:37.640746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 19:01:37.640018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1024 19:01:37.640132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:01:37.640228       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1024 19:01:37.640381       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:01:37.640495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:01:37.640628       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1024 19:01:37.640634       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1024 19:01:38.491343       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:01:38.491398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1024 19:01:38.635616       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:01:38.635764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1024 19:01:38.745116       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:01:38.745206       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 19:01:38.780590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:01:38.780678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1024 19:01:38.783983       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 19:01:38.784017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1024 19:01:38.810383       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:01:38.810468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1024 19:01:38.846548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:01:38.846648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1024 19:01:38.852481       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 19:01:38.852568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1024 19:01:41.807479       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 19:01:06 UTC, ends at Tue 2023-10-24 19:06:14 UTC. --
	Oct 24 19:06:03 addons-866342 kubelet[1252]: I1024 19:06:03.335049    1252 memory_manager.go:346] "RemoveStaleState removing state" podUID="68ab6123-ccb9-4af7-aa9d-dc523a62522a" containerName="volume-snapshot-controller"
	Oct 24 19:06:03 addons-866342 kubelet[1252]: I1024 19:06:03.345598    1252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8b0a38e0-34fa-4451-b863-0a0bd0a3253a-gcp-creds\") pod \"hello-world-app-5d77478584-wn6qs\" (UID: \"8b0a38e0-34fa-4451-b863-0a0bd0a3253a\") " pod="default/hello-world-app-5d77478584-wn6qs"
	Oct 24 19:06:03 addons-866342 kubelet[1252]: I1024 19:06:03.345633    1252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65phr\" (UniqueName: \"kubernetes.io/projected/8b0a38e0-34fa-4451-b863-0a0bd0a3253a-kube-api-access-65phr\") pod \"hello-world-app-5d77478584-wn6qs\" (UID: \"8b0a38e0-34fa-4451-b863-0a0bd0a3253a\") " pod="default/hello-world-app-5d77478584-wn6qs"
	Oct 24 19:06:04 addons-866342 kubelet[1252]: I1024 19:06:04.756718    1252 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8zkm\" (UniqueName: \"kubernetes.io/projected/5d55372e-c8e4-4e55-b251-9dad4fad9890-kube-api-access-s8zkm\") pod \"5d55372e-c8e4-4e55-b251-9dad4fad9890\" (UID: \"5d55372e-c8e4-4e55-b251-9dad4fad9890\") "
	Oct 24 19:06:04 addons-866342 kubelet[1252]: I1024 19:06:04.759754    1252 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d55372e-c8e4-4e55-b251-9dad4fad9890-kube-api-access-s8zkm" (OuterVolumeSpecName: "kube-api-access-s8zkm") pod "5d55372e-c8e4-4e55-b251-9dad4fad9890" (UID: "5d55372e-c8e4-4e55-b251-9dad4fad9890"). InnerVolumeSpecName "kube-api-access-s8zkm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 24 19:06:04 addons-866342 kubelet[1252]: I1024 19:06:04.857489    1252 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-s8zkm\" (UniqueName: \"kubernetes.io/projected/5d55372e-c8e4-4e55-b251-9dad4fad9890-kube-api-access-s8zkm\") on node \"addons-866342\" DevicePath \"\""
	Oct 24 19:06:05 addons-866342 kubelet[1252]: I1024 19:06:05.301691    1252 scope.go:117] "RemoveContainer" containerID="3db3a60377ca4e038ffcdb23116f604f481dd247a6921661577fe8132aee6288"
	Oct 24 19:06:05 addons-866342 kubelet[1252]: I1024 19:06:05.389072    1252 scope.go:117] "RemoveContainer" containerID="3db3a60377ca4e038ffcdb23116f604f481dd247a6921661577fe8132aee6288"
	Oct 24 19:06:05 addons-866342 kubelet[1252]: E1024 19:06:05.390040    1252 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3db3a60377ca4e038ffcdb23116f604f481dd247a6921661577fe8132aee6288\": container with ID starting with 3db3a60377ca4e038ffcdb23116f604f481dd247a6921661577fe8132aee6288 not found: ID does not exist" containerID="3db3a60377ca4e038ffcdb23116f604f481dd247a6921661577fe8132aee6288"
	Oct 24 19:06:05 addons-866342 kubelet[1252]: I1024 19:06:05.390081    1252 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3db3a60377ca4e038ffcdb23116f604f481dd247a6921661577fe8132aee6288"} err="failed to get container status \"3db3a60377ca4e038ffcdb23116f604f481dd247a6921661577fe8132aee6288\": rpc error: code = NotFound desc = could not find container \"3db3a60377ca4e038ffcdb23116f604f481dd247a6921661577fe8132aee6288\": container with ID starting with 3db3a60377ca4e038ffcdb23116f604f481dd247a6921661577fe8132aee6288 not found: ID does not exist"
	Oct 24 19:06:07 addons-866342 kubelet[1252]: I1024 19:06:07.058089    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5d55372e-c8e4-4e55-b251-9dad4fad9890" path="/var/lib/kubelet/pods/5d55372e-c8e4-4e55-b251-9dad4fad9890/volumes"
	Oct 24 19:06:07 addons-866342 kubelet[1252]: I1024 19:06:07.058533    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="77b5c409-9bd7-4af3-bb7f-cc9c167c8911" path="/var/lib/kubelet/pods/77b5c409-9bd7-4af3-bb7f-cc9c167c8911/volumes"
	Oct 24 19:06:07 addons-866342 kubelet[1252]: I1024 19:06:07.058928    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="db328688-a88c-432e-88bc-3b2a4d39eded" path="/var/lib/kubelet/pods/db328688-a88c-432e-88bc-3b2a4d39eded/volumes"
	Oct 24 19:06:07 addons-866342 kubelet[1252]: I1024 19:06:07.328138    1252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-wn6qs" podStartSLOduration=2.706894324 podCreationTimestamp="2023-10-24 19:06:03 +0000 UTC" firstStartedPulling="2023-10-24 19:06:04.586874635 +0000 UTC m=+263.684304359" lastFinishedPulling="2023-10-24 19:06:06.208060479 +0000 UTC m=+265.305490204" observedRunningTime="2023-10-24 19:06:07.32676389 +0000 UTC m=+266.424193633" watchObservedRunningTime="2023-10-24 19:06:07.328080169 +0000 UTC m=+266.425509912"
	Oct 24 19:06:09 addons-866342 kubelet[1252]: I1024 19:06:09.700564    1252 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glr8g\" (UniqueName: \"kubernetes.io/projected/85a4a208-cb9e-4c26-8a4f-f939c08527d3-kube-api-access-glr8g\") pod \"85a4a208-cb9e-4c26-8a4f-f939c08527d3\" (UID: \"85a4a208-cb9e-4c26-8a4f-f939c08527d3\") "
	Oct 24 19:06:09 addons-866342 kubelet[1252]: I1024 19:06:09.701196    1252 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85a4a208-cb9e-4c26-8a4f-f939c08527d3-webhook-cert\") pod \"85a4a208-cb9e-4c26-8a4f-f939c08527d3\" (UID: \"85a4a208-cb9e-4c26-8a4f-f939c08527d3\") "
	Oct 24 19:06:09 addons-866342 kubelet[1252]: I1024 19:06:09.703653    1252 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a4a208-cb9e-4c26-8a4f-f939c08527d3-kube-api-access-glr8g" (OuterVolumeSpecName: "kube-api-access-glr8g") pod "85a4a208-cb9e-4c26-8a4f-f939c08527d3" (UID: "85a4a208-cb9e-4c26-8a4f-f939c08527d3"). InnerVolumeSpecName "kube-api-access-glr8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 24 19:06:09 addons-866342 kubelet[1252]: I1024 19:06:09.706376    1252 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a4a208-cb9e-4c26-8a4f-f939c08527d3-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "85a4a208-cb9e-4c26-8a4f-f939c08527d3" (UID: "85a4a208-cb9e-4c26-8a4f-f939c08527d3"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 24 19:06:09 addons-866342 kubelet[1252]: I1024 19:06:09.802009    1252 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-glr8g\" (UniqueName: \"kubernetes.io/projected/85a4a208-cb9e-4c26-8a4f-f939c08527d3-kube-api-access-glr8g\") on node \"addons-866342\" DevicePath \"\""
	Oct 24 19:06:09 addons-866342 kubelet[1252]: I1024 19:06:09.802064    1252 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85a4a208-cb9e-4c26-8a4f-f939c08527d3-webhook-cert\") on node \"addons-866342\" DevicePath \"\""
	Oct 24 19:06:10 addons-866342 kubelet[1252]: I1024 19:06:10.330843    1252 scope.go:117] "RemoveContainer" containerID="a56e5a8f13c6029a8e463fbc83fbadfaf0091615033be89685c3c2458f257be0"
	Oct 24 19:06:10 addons-866342 kubelet[1252]: I1024 19:06:10.379154    1252 scope.go:117] "RemoveContainer" containerID="a56e5a8f13c6029a8e463fbc83fbadfaf0091615033be89685c3c2458f257be0"
	Oct 24 19:06:10 addons-866342 kubelet[1252]: E1024 19:06:10.381164    1252 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56e5a8f13c6029a8e463fbc83fbadfaf0091615033be89685c3c2458f257be0\": container with ID starting with a56e5a8f13c6029a8e463fbc83fbadfaf0091615033be89685c3c2458f257be0 not found: ID does not exist" containerID="a56e5a8f13c6029a8e463fbc83fbadfaf0091615033be89685c3c2458f257be0"
	Oct 24 19:06:10 addons-866342 kubelet[1252]: I1024 19:06:10.381267    1252 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56e5a8f13c6029a8e463fbc83fbadfaf0091615033be89685c3c2458f257be0"} err="failed to get container status \"a56e5a8f13c6029a8e463fbc83fbadfaf0091615033be89685c3c2458f257be0\": rpc error: code = NotFound desc = could not find container \"a56e5a8f13c6029a8e463fbc83fbadfaf0091615033be89685c3c2458f257be0\": container with ID starting with a56e5a8f13c6029a8e463fbc83fbadfaf0091615033be89685c3c2458f257be0 not found: ID does not exist"
	Oct 24 19:06:11 addons-866342 kubelet[1252]: I1024 19:06:11.057932    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="85a4a208-cb9e-4c26-8a4f-f939c08527d3" path="/var/lib/kubelet/pods/85a4a208-cb9e-4c26-8a4f-f939c08527d3/volumes"
	
	* 
	* ==> storage-provisioner [aef8fe13203e4a11604bdcb89c937b2cb59434e95ec1d8ec8358748d47ab2dec] <==
	* I1024 19:02:45.941903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:02:45.960370       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:02:45.960496       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 19:02:45.974451       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 19:02:45.975132       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-866342_b09f27bd-b684-4953-848e-949d7dd75a59!
	I1024 19:02:45.974559       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f49ec8cf-b9c0-4c3b-b848-b2be04049b0a", APIVersion:"v1", ResourceVersion:"971", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-866342_b09f27bd-b684-4953-848e-949d7dd75a59 became leader
	I1024 19:02:46.075313       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-866342_b09f27bd-b684-4953-848e-949d7dd75a59!
	
	* 
	* ==> storage-provisioner [d53a67d397330438efea6123cf6942871d601269335e882400f80253b73792a9] <==
	* I1024 19:02:14.590000       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1024 19:02:44.609185       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-866342 -n addons-866342
helpers_test.go:261: (dbg) Run:  kubectl --context addons-866342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.80s)

                                                
                                    
TestAddons/parallel/InspektorGadget (8.71s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-c4f8q" [c7c62856-a84d-4c73-b4b5-ab373ec3b9c9] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.017052835s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-866342
addons_test.go:840: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-866342: exit status 11 (490.175914ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-10-24T19:03:41Z" level=error msg="stat /run/runc/f45a75f792bfa4e39c7e0ee3f5551a92642103c75ba844ca258e56fcd459720f: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:841: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-866342" : exit status 11
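For context on the failure above: the MK_ADDON_DISABLE_PAUSED error means minikube's paused-container check (which, per the stderr, shells out to "sudo runc list -f json" on the node) exited non-zero before the addon could be disabled. A minimal, hedged way to re-run that same check by hand, assuming the addons-866342 profile is still running (this command is illustrative and not part of the captured test output), would be:

	# Re-run the runc listing that minikube's "check paused" step performs inside the VM.
	out/minikube-linux-amd64 -p addons-866342 ssh "sudo runc list -f json"

A "stat /run/runc/<id>: no such file or directory" result, as in the stderr above, likely indicates the container's state directory had already been removed by the time the check ran.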
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-866342 -n addons-866342
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-866342 logs -n 25: (2.154886193s)
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-645515 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | -p download-only-645515                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-645515 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | -p download-only-645515                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:00 UTC |
	| delete  | -p download-only-645515                                                                     | download-only-645515 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:00 UTC |
	| delete  | -p download-only-645515                                                                     | download-only-645515 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:00 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-397693 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | binary-mirror-397693                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36043                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-397693                                                                     | binary-mirror-397693 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:00 UTC |
	| addons  | enable dashboard -p                                                                         | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | addons-866342                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | addons-866342                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-866342 --wait=true                                                                | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:03 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | addons-866342                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-866342 ssh cat                                                                       | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | /opt/local-path-provisioner/pvc-36d1a6de-39d6-4c81-a7f0-3bf4da62b74d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-866342 addons disable                                                                | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-866342 ip                                                                            | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	| addons  | addons-866342 addons disable                                                                | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-866342 addons disable                                                                | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-866342 addons                                                                        | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-866342        | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC |                     |
	|         | addons-866342                                                                               |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:00:53
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:00:53.618092   16652 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:00:53.618354   16652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:53.618365   16652 out.go:309] Setting ErrFile to fd 2...
	I1024 19:00:53.618369   16652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:53.618537   16652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 19:00:53.619115   16652 out.go:303] Setting JSON to false
	I1024 19:00:53.619927   16652 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2352,"bootTime":1698171702,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:00:53.619984   16652 start.go:138] virtualization: kvm guest
	I1024 19:00:53.622328   16652 out.go:177] * [addons-866342] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:00:53.624022   16652 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:00:53.625545   16652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:00:53.623959   16652 notify.go:220] Checking for updates...
	I1024 19:00:53.628430   16652 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:00:53.629887   16652 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:00:53.631239   16652 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:00:53.632603   16652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:00:53.634132   16652 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:00:53.664670   16652 out.go:177] * Using the kvm2 driver based on user configuration
	I1024 19:00:53.666142   16652 start.go:298] selected driver: kvm2
	I1024 19:00:53.666155   16652 start.go:902] validating driver "kvm2" against <nil>
	I1024 19:00:53.666165   16652 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:00:53.666855   16652 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:00:53.666945   16652 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:00:53.680707   16652 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:00:53.680755   16652 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:00:53.680967   16652 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:00:53.681031   16652 cni.go:84] Creating CNI manager for ""
	I1024 19:00:53.681047   16652 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:00:53.681061   16652 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1024 19:00:53.681070   16652 start_flags.go:323] config:
	{Name:addons-866342 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-866342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:00:53.681186   16652 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:00:53.683088   16652 out.go:177] * Starting control plane node addons-866342 in cluster addons-866342
	I1024 19:00:53.684529   16652 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:00:53.684566   16652 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1024 19:00:53.684576   16652 cache.go:57] Caching tarball of preloaded images
	I1024 19:00:53.684641   16652 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 19:00:53.684652   16652 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:00:53.684936   16652 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/config.json ...
	I1024 19:00:53.684955   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/config.json: {Name:mk3628ed1574a5393dd97070b77f0feb57c98277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:00:53.685083   16652 start.go:365] acquiring machines lock for addons-866342: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:00:53.685126   16652 start.go:369] acquired machines lock for "addons-866342" in 28.474µs
	I1024 19:00:53.685146   16652 start.go:93] Provisioning new machine with config: &{Name:addons-866342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:addons-866342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:00:53.685211   16652 start.go:125] createHost starting for "" (driver="kvm2")
	I1024 19:00:53.686816   16652 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1024 19:00:53.686909   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:00:53.686944   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:00:53.700021   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I1024 19:00:53.700425   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:00:53.700948   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:00:53.700969   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:00:53.701280   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:00:53.701467   16652 main.go:141] libmachine: (addons-866342) Calling .GetMachineName
	I1024 19:00:53.701617   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:00:53.701768   16652 start.go:159] libmachine.API.Create for "addons-866342" (driver="kvm2")
	I1024 19:00:53.701796   16652 client.go:168] LocalClient.Create starting
	I1024 19:00:53.701825   16652 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem
	I1024 19:00:53.961535   16652 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem
	I1024 19:00:54.441995   16652 main.go:141] libmachine: Running pre-create checks...
	I1024 19:00:54.442018   16652 main.go:141] libmachine: (addons-866342) Calling .PreCreateCheck
	I1024 19:00:54.442525   16652 main.go:141] libmachine: (addons-866342) Calling .GetConfigRaw
	I1024 19:00:54.442976   16652 main.go:141] libmachine: Creating machine...
	I1024 19:00:54.442991   16652 main.go:141] libmachine: (addons-866342) Calling .Create
	I1024 19:00:54.443152   16652 main.go:141] libmachine: (addons-866342) Creating KVM machine...
	I1024 19:00:54.444488   16652 main.go:141] libmachine: (addons-866342) DBG | found existing default KVM network
	I1024 19:00:54.445245   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:54.445040   16674 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1024 19:00:54.450829   16652 main.go:141] libmachine: (addons-866342) DBG | trying to create private KVM network mk-addons-866342 192.168.39.0/24...
	I1024 19:00:54.516239   16652 main.go:141] libmachine: (addons-866342) DBG | private KVM network mk-addons-866342 192.168.39.0/24 created
	I1024 19:00:54.516283   16652 main.go:141] libmachine: (addons-866342) Setting up store path in /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342 ...
	I1024 19:00:54.516306   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:54.516238   16674 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:00:54.516334   16652 main.go:141] libmachine: (addons-866342) Building disk image from file:///home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso
	I1024 19:00:54.516369   16652 main.go:141] libmachine: (addons-866342) Downloading /home/jenkins/minikube-integration/17485-9023/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso...
	I1024 19:00:54.732029   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:54.731909   16674 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa...
	I1024 19:00:54.806829   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:54.806710   16674 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/addons-866342.rawdisk...
	I1024 19:00:54.806866   16652 main.go:141] libmachine: (addons-866342) DBG | Writing magic tar header
	I1024 19:00:54.806881   16652 main.go:141] libmachine: (addons-866342) DBG | Writing SSH key tar header
	I1024 19:00:54.806903   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:54.806822   16674 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342 ...
	I1024 19:00:54.806997   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342
	I1024 19:00:54.807044   16652 main.go:141] libmachine: (addons-866342) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342 (perms=drwx------)
	I1024 19:00:54.807073   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube/machines
	I1024 19:00:54.807093   16652 main.go:141] libmachine: (addons-866342) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube/machines (perms=drwxr-xr-x)
	I1024 19:00:54.807124   16652 main.go:141] libmachine: (addons-866342) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube (perms=drwxr-xr-x)
	I1024 19:00:54.807135   16652 main.go:141] libmachine: (addons-866342) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023 (perms=drwxrwxr-x)
	I1024 19:00:54.807143   16652 main.go:141] libmachine: (addons-866342) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1024 19:00:54.807158   16652 main.go:141] libmachine: (addons-866342) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1024 19:00:54.807175   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:00:54.807191   16652 main.go:141] libmachine: (addons-866342) Creating domain...
	I1024 19:00:54.807205   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023
	I1024 19:00:54.807218   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1024 19:00:54.807226   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home/jenkins
	I1024 19:00:54.807238   16652 main.go:141] libmachine: (addons-866342) DBG | Checking permissions on dir: /home
	I1024 19:00:54.807252   16652 main.go:141] libmachine: (addons-866342) DBG | Skipping /home - not owner
	I1024 19:00:54.808239   16652 main.go:141] libmachine: (addons-866342) define libvirt domain using xml: 
	I1024 19:00:54.808261   16652 main.go:141] libmachine: (addons-866342) <domain type='kvm'>
	I1024 19:00:54.808273   16652 main.go:141] libmachine: (addons-866342)   <name>addons-866342</name>
	I1024 19:00:54.808284   16652 main.go:141] libmachine: (addons-866342)   <memory unit='MiB'>4000</memory>
	I1024 19:00:54.808305   16652 main.go:141] libmachine: (addons-866342)   <vcpu>2</vcpu>
	I1024 19:00:54.808325   16652 main.go:141] libmachine: (addons-866342)   <features>
	I1024 19:00:54.808340   16652 main.go:141] libmachine: (addons-866342)     <acpi/>
	I1024 19:00:54.808353   16652 main.go:141] libmachine: (addons-866342)     <apic/>
	I1024 19:00:54.808378   16652 main.go:141] libmachine: (addons-866342)     <pae/>
	I1024 19:00:54.808410   16652 main.go:141] libmachine: (addons-866342)     
	I1024 19:00:54.808427   16652 main.go:141] libmachine: (addons-866342)   </features>
	I1024 19:00:54.808446   16652 main.go:141] libmachine: (addons-866342)   <cpu mode='host-passthrough'>
	I1024 19:00:54.808463   16652 main.go:141] libmachine: (addons-866342)   
	I1024 19:00:54.808472   16652 main.go:141] libmachine: (addons-866342)   </cpu>
	I1024 19:00:54.808511   16652 main.go:141] libmachine: (addons-866342)   <os>
	I1024 19:00:54.808536   16652 main.go:141] libmachine: (addons-866342)     <type>hvm</type>
	I1024 19:00:54.808544   16652 main.go:141] libmachine: (addons-866342)     <boot dev='cdrom'/>
	I1024 19:00:54.808553   16652 main.go:141] libmachine: (addons-866342)     <boot dev='hd'/>
	I1024 19:00:54.808560   16652 main.go:141] libmachine: (addons-866342)     <bootmenu enable='no'/>
	I1024 19:00:54.808570   16652 main.go:141] libmachine: (addons-866342)   </os>
	I1024 19:00:54.808580   16652 main.go:141] libmachine: (addons-866342)   <devices>
	I1024 19:00:54.808586   16652 main.go:141] libmachine: (addons-866342)     <disk type='file' device='cdrom'>
	I1024 19:00:54.808597   16652 main.go:141] libmachine: (addons-866342)       <source file='/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/boot2docker.iso'/>
	I1024 19:00:54.808613   16652 main.go:141] libmachine: (addons-866342)       <target dev='hdc' bus='scsi'/>
	I1024 19:00:54.808620   16652 main.go:141] libmachine: (addons-866342)       <readonly/>
	I1024 19:00:54.808633   16652 main.go:141] libmachine: (addons-866342)     </disk>
	I1024 19:00:54.808643   16652 main.go:141] libmachine: (addons-866342)     <disk type='file' device='disk'>
	I1024 19:00:54.808656   16652 main.go:141] libmachine: (addons-866342)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1024 19:00:54.808669   16652 main.go:141] libmachine: (addons-866342)       <source file='/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/addons-866342.rawdisk'/>
	I1024 19:00:54.808678   16652 main.go:141] libmachine: (addons-866342)       <target dev='hda' bus='virtio'/>
	I1024 19:00:54.808686   16652 main.go:141] libmachine: (addons-866342)     </disk>
	I1024 19:00:54.808695   16652 main.go:141] libmachine: (addons-866342)     <interface type='network'>
	I1024 19:00:54.808702   16652 main.go:141] libmachine: (addons-866342)       <source network='mk-addons-866342'/>
	I1024 19:00:54.808714   16652 main.go:141] libmachine: (addons-866342)       <model type='virtio'/>
	I1024 19:00:54.808724   16652 main.go:141] libmachine: (addons-866342)     </interface>
	I1024 19:00:54.808730   16652 main.go:141] libmachine: (addons-866342)     <interface type='network'>
	I1024 19:00:54.808751   16652 main.go:141] libmachine: (addons-866342)       <source network='default'/>
	I1024 19:00:54.808771   16652 main.go:141] libmachine: (addons-866342)       <model type='virtio'/>
	I1024 19:00:54.808786   16652 main.go:141] libmachine: (addons-866342)     </interface>
	I1024 19:00:54.808799   16652 main.go:141] libmachine: (addons-866342)     <serial type='pty'>
	I1024 19:00:54.808810   16652 main.go:141] libmachine: (addons-866342)       <target port='0'/>
	I1024 19:00:54.808827   16652 main.go:141] libmachine: (addons-866342)     </serial>
	I1024 19:00:54.808838   16652 main.go:141] libmachine: (addons-866342)     <console type='pty'>
	I1024 19:00:54.808847   16652 main.go:141] libmachine: (addons-866342)       <target type='serial' port='0'/>
	I1024 19:00:54.808867   16652 main.go:141] libmachine: (addons-866342)     </console>
	I1024 19:00:54.808880   16652 main.go:141] libmachine: (addons-866342)     <rng model='virtio'>
	I1024 19:00:54.808895   16652 main.go:141] libmachine: (addons-866342)       <backend model='random'>/dev/random</backend>
	I1024 19:00:54.808932   16652 main.go:141] libmachine: (addons-866342)     </rng>
	I1024 19:00:54.808946   16652 main.go:141] libmachine: (addons-866342)     
	I1024 19:00:54.808968   16652 main.go:141] libmachine: (addons-866342)     
	I1024 19:00:54.808990   16652 main.go:141] libmachine: (addons-866342)   </devices>
	I1024 19:00:54.809013   16652 main.go:141] libmachine: (addons-866342) </domain>
	I1024 19:00:54.809029   16652 main.go:141] libmachine: (addons-866342) 
	I1024 19:00:54.814499   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:a9:fb:bb in network default
	I1024 19:00:54.815124   16652 main.go:141] libmachine: (addons-866342) Ensuring networks are active...
	I1024 19:00:54.815155   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:54.815811   16652 main.go:141] libmachine: (addons-866342) Ensuring network default is active
	I1024 19:00:54.816130   16652 main.go:141] libmachine: (addons-866342) Ensuring network mk-addons-866342 is active
	I1024 19:00:54.816597   16652 main.go:141] libmachine: (addons-866342) Getting domain xml...
	I1024 19:00:54.817251   16652 main.go:141] libmachine: (addons-866342) Creating domain...
	I1024 19:00:56.222041   16652 main.go:141] libmachine: (addons-866342) Waiting to get IP...
	I1024 19:00:56.222681   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:56.222988   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:56.223045   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:56.222981   16674 retry.go:31] will retry after 235.339237ms: waiting for machine to come up
	I1024 19:00:56.460449   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:56.460837   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:56.460857   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:56.460803   16674 retry.go:31] will retry after 375.487717ms: waiting for machine to come up
	I1024 19:00:56.838287   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:56.838659   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:56.838679   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:56.838609   16674 retry.go:31] will retry after 362.75156ms: waiting for machine to come up
	I1024 19:00:57.203285   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:57.203703   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:57.203726   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:57.203690   16674 retry.go:31] will retry after 600.274701ms: waiting for machine to come up
	I1024 19:00:57.805396   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:57.805777   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:57.805803   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:57.805738   16674 retry.go:31] will retry after 755.565775ms: waiting for machine to come up
	I1024 19:00:58.562657   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:58.563095   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:58.563124   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:58.563074   16674 retry.go:31] will retry after 792.580761ms: waiting for machine to come up
	I1024 19:00:59.357583   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:00:59.357901   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:00:59.357925   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:00:59.357843   16674 retry.go:31] will retry after 1.073478461s: waiting for machine to come up
	I1024 19:01:00.433104   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:00.433519   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:00.433547   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:00.433457   16674 retry.go:31] will retry after 1.342291864s: waiting for machine to come up
	I1024 19:01:01.777946   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:01.778301   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:01.778334   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:01.778248   16674 retry.go:31] will retry after 1.848774692s: waiting for machine to come up
	I1024 19:01:03.629233   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:03.629747   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:03.629768   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:03.629690   16674 retry.go:31] will retry after 2.253036424s: waiting for machine to come up
	I1024 19:01:05.885559   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:05.886049   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:05.886076   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:05.886015   16674 retry.go:31] will retry after 2.239298601s: waiting for machine to come up
	I1024 19:01:08.126420   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:08.126691   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:08.126744   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:08.126652   16674 retry.go:31] will retry after 2.332501495s: waiting for machine to come up
	I1024 19:01:10.461530   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:10.461831   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:10.461860   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:10.461793   16674 retry.go:31] will retry after 4.390039765s: waiting for machine to come up
	I1024 19:01:14.853207   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:14.853630   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find current IP address of domain addons-866342 in network mk-addons-866342
	I1024 19:01:14.853655   16652 main.go:141] libmachine: (addons-866342) DBG | I1024 19:01:14.853589   16674 retry.go:31] will retry after 4.206775238s: waiting for machine to come up
	I1024 19:01:19.062273   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.062676   16652 main.go:141] libmachine: (addons-866342) Found IP for machine: 192.168.39.163
	I1024 19:01:19.062727   16652 main.go:141] libmachine: (addons-866342) Reserving static IP address...
	I1024 19:01:19.062751   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has current primary IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.063058   16652 main.go:141] libmachine: (addons-866342) DBG | unable to find host DHCP lease matching {name: "addons-866342", mac: "52:54:00:26:c1:28", ip: "192.168.39.163"} in network mk-addons-866342
	I1024 19:01:19.131763   16652 main.go:141] libmachine: (addons-866342) DBG | Getting to WaitForSSH function...
	I1024 19:01:19.131791   16652 main.go:141] libmachine: (addons-866342) Reserved static IP address: 192.168.39.163
	I1024 19:01:19.131835   16652 main.go:141] libmachine: (addons-866342) Waiting for SSH to be available...
	I1024 19:01:19.134337   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.134707   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:minikube Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.134739   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.134908   16652 main.go:141] libmachine: (addons-866342) DBG | Using SSH client type: external
	I1024 19:01:19.134936   16652 main.go:141] libmachine: (addons-866342) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa (-rw-------)
	I1024 19:01:19.134982   16652 main.go:141] libmachine: (addons-866342) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 19:01:19.134996   16652 main.go:141] libmachine: (addons-866342) DBG | About to run SSH command:
	I1024 19:01:19.135006   16652 main.go:141] libmachine: (addons-866342) DBG | exit 0
	I1024 19:01:19.277327   16652 main.go:141] libmachine: (addons-866342) DBG | SSH cmd err, output: <nil>: 
	I1024 19:01:19.277549   16652 main.go:141] libmachine: (addons-866342) KVM machine creation complete!
	I1024 19:01:19.277865   16652 main.go:141] libmachine: (addons-866342) Calling .GetConfigRaw
	I1024 19:01:19.278420   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:19.278604   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:19.278764   16652 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1024 19:01:19.278784   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:19.280117   16652 main.go:141] libmachine: Detecting operating system of created instance...
	I1024 19:01:19.280131   16652 main.go:141] libmachine: Waiting for SSH to be available...
	I1024 19:01:19.280137   16652 main.go:141] libmachine: Getting to WaitForSSH function...
	I1024 19:01:19.280144   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:19.281975   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.282291   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.282316   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.282448   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:19.282622   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.282758   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.282878   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:19.283007   16652 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:19.283396   16652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1024 19:01:19.283410   16652 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1024 19:01:19.412538   16652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:01:19.412566   16652 main.go:141] libmachine: Detecting the provisioner...
	I1024 19:01:19.412577   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:19.415189   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.415502   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.415536   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.415650   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:19.415830   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.415986   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.416122   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:19.416290   16652 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:19.416613   16652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1024 19:01:19.416628   16652 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1024 19:01:19.546090   16652 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g71212f5-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1024 19:01:19.546162   16652 main.go:141] libmachine: found compatible host: buildroot
	I1024 19:01:19.546177   16652 main.go:141] libmachine: Provisioning with buildroot...
	I1024 19:01:19.546189   16652 main.go:141] libmachine: (addons-866342) Calling .GetMachineName
	I1024 19:01:19.546420   16652 buildroot.go:166] provisioning hostname "addons-866342"
	I1024 19:01:19.546439   16652 main.go:141] libmachine: (addons-866342) Calling .GetMachineName
	I1024 19:01:19.546622   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:19.549169   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.549524   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.549579   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.549685   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:19.549861   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.550002   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.550152   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:19.550372   16652 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:19.550740   16652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1024 19:01:19.550758   16652 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-866342 && echo "addons-866342" | sudo tee /etc/hostname
	I1024 19:01:19.690042   16652 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-866342
	
	I1024 19:01:19.690066   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:19.692641   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.693114   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.693149   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.693250   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:19.693407   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.693577   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.693715   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:19.693859   16652 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:19.694182   16652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1024 19:01:19.694200   16652 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-866342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-866342/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-866342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:01:19.828475   16652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:01:19.828504   16652 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 19:01:19.828529   16652 buildroot.go:174] setting up certificates
	I1024 19:01:19.828538   16652 provision.go:83] configureAuth start
	I1024 19:01:19.828549   16652 main.go:141] libmachine: (addons-866342) Calling .GetMachineName
	I1024 19:01:19.828773   16652 main.go:141] libmachine: (addons-866342) Calling .GetIP
	I1024 19:01:19.831502   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.831850   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.831885   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.832000   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:19.834270   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.834643   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.834674   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.834811   16652 provision.go:138] copyHostCerts
	I1024 19:01:19.834863   16652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 19:01:19.835005   16652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 19:01:19.835088   16652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 19:01:19.835146   16652 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.addons-866342 san=[192.168.39.163 192.168.39.163 localhost 127.0.0.1 minikube addons-866342]
	I1024 19:01:19.938205   16652 provision.go:172] copyRemoteCerts
	I1024 19:01:19.938265   16652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:01:19.938288   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:19.940745   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.941073   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:19.941099   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:19.941316   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:19.941501   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:19.941649   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:19.941860   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:20.035343   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 19:01:20.059410   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1024 19:01:20.082392   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:01:20.105763   16652 provision.go:86] duration metric: configureAuth took 277.20792ms
	I1024 19:01:20.105793   16652 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:01:20.106026   16652 config.go:182] Loaded profile config "addons-866342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:01:20.106115   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:20.108682   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.109032   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.109077   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.109213   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:20.109406   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.109548   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.109652   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:20.109840   16652 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:20.110232   16652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1024 19:01:20.110255   16652 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:01:20.469265   16652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:01:20.469288   16652 main.go:141] libmachine: Checking connection to Docker...
	I1024 19:01:20.469330   16652 main.go:141] libmachine: (addons-866342) Calling .GetURL
	I1024 19:01:20.470540   16652 main.go:141] libmachine: (addons-866342) DBG | Using libvirt version 6000000
	I1024 19:01:20.473891   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.474292   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.474324   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.474514   16652 main.go:141] libmachine: Docker is up and running!
	I1024 19:01:20.474528   16652 main.go:141] libmachine: Reticulating splines...
	I1024 19:01:20.474535   16652 client.go:171] LocalClient.Create took 26.772732668s
	I1024 19:01:20.474554   16652 start.go:167] duration metric: libmachine.API.Create for "addons-866342" took 26.772787359s
	I1024 19:01:20.474566   16652 start.go:300] post-start starting for "addons-866342" (driver="kvm2")
	I1024 19:01:20.474579   16652 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:01:20.474602   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:20.474832   16652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:01:20.474863   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:20.476800   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.477115   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.477157   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.477285   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:20.477449   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.477588   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:20.477711   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:20.570396   16652 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:01:20.574655   16652 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 19:01:20.574676   16652 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 19:01:20.574736   16652 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 19:01:20.574757   16652 start.go:303] post-start completed in 100.185858ms
	I1024 19:01:20.574789   16652 main.go:141] libmachine: (addons-866342) Calling .GetConfigRaw
	I1024 19:01:20.630665   16652 main.go:141] libmachine: (addons-866342) Calling .GetIP
	I1024 19:01:20.633368   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.633685   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.633713   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.634079   16652 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/config.json ...
	I1024 19:01:20.634288   16652 start.go:128] duration metric: createHost completed in 26.949067524s
	I1024 19:01:20.634314   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:20.636708   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.637006   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.637036   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.637144   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:20.637354   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.637512   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.637663   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:20.637820   16652 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:20.638154   16652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I1024 19:01:20.638166   16652 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 19:01:20.770027   16652 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698174080.754216177
	
	I1024 19:01:20.770046   16652 fix.go:206] guest clock: 1698174080.754216177
	I1024 19:01:20.770053   16652 fix.go:219] Guest: 2023-10-24 19:01:20.754216177 +0000 UTC Remote: 2023-10-24 19:01:20.634300487 +0000 UTC m=+27.067710926 (delta=119.91569ms)
	I1024 19:01:20.770072   16652 fix.go:190] guest clock delta is within tolerance: 119.91569ms
	I1024 19:01:20.770079   16652 start.go:83] releasing machines lock for "addons-866342", held for 27.084939654s
	I1024 19:01:20.770107   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:20.770358   16652 main.go:141] libmachine: (addons-866342) Calling .GetIP
	I1024 19:01:20.773083   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.773425   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.773458   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.773629   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:20.774055   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:20.774215   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:20.774307   16652 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:01:20.774351   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:20.774470   16652 ssh_runner.go:195] Run: cat /version.json
	I1024 19:01:20.774496   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:20.776892   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.777136   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.777260   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.777306   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.777405   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:20.777517   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:20.777547   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:20.777566   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.777722   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:20.777792   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:20.777963   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:20.777953   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:20.778123   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:20.778258   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:20.932612   16652 ssh_runner.go:195] Run: systemctl --version
	I1024 19:01:20.938736   16652 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:01:21.597436   16652 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 19:01:21.603674   16652 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:01:21.603745   16652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:01:21.620393   16652 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 19:01:21.620414   16652 start.go:472] detecting cgroup driver to use...
	I1024 19:01:21.620474   16652 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:01:21.637583   16652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:01:21.650249   16652 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:01:21.650318   16652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:01:21.663323   16652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:01:21.676403   16652 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:01:21.779753   16652 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:01:21.895677   16652 docker.go:214] disabling docker service ...
	I1024 19:01:21.895754   16652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:01:21.908479   16652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:01:21.920037   16652 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:01:22.019018   16652 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:01:22.136023   16652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:01:22.148608   16652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:01:22.165724   16652 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:01:22.165776   16652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:01:22.174313   16652 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:01:22.174364   16652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:01:22.183213   16652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:01:22.191818   16652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:01:22.200747   16652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:01:22.209531   16652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:01:22.217144   16652 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 19:01:22.217178   16652 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 19:01:22.229466   16652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:01:22.237027   16652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:01:22.349394   16652 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:01:22.510565   16652 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:01:22.510644   16652 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:01:22.515621   16652 start.go:540] Will wait 60s for crictl version
	I1024 19:01:22.515669   16652 ssh_runner.go:195] Run: which crictl
	I1024 19:01:22.522282   16652 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:01:22.562264   16652 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 19:01:22.562363   16652 ssh_runner.go:195] Run: crio --version
	I1024 19:01:22.605750   16652 ssh_runner.go:195] Run: crio --version
	I1024 19:01:22.663303   16652 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 19:01:22.665251   16652 main.go:141] libmachine: (addons-866342) Calling .GetIP
	I1024 19:01:22.668301   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:22.668630   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:22.668670   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:22.668815   16652 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 19:01:22.672932   16652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:01:22.685527   16652 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:01:22.685580   16652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:01:22.718198   16652 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 19:01:22.718257   16652 ssh_runner.go:195] Run: which lz4
	I1024 19:01:22.722016   16652 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 19:01:22.725953   16652 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:01:22.725981   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1024 19:01:24.471185   16652 crio.go:444] Took 1.749190 seconds to copy over tarball
	I1024 19:01:24.471252   16652 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 19:01:27.424688   16652 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.953407338s)
	I1024 19:01:27.424715   16652 crio.go:451] Took 2.953507 seconds to extract the tarball
	I1024 19:01:27.424723   16652 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 19:01:27.465621   16652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:01:27.536656   16652 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:01:27.536682   16652 cache_images.go:84] Images are preloaded, skipping loading
	I1024 19:01:27.536768   16652 ssh_runner.go:195] Run: crio config
	I1024 19:01:27.603082   16652 cni.go:84] Creating CNI manager for ""
	I1024 19:01:27.603106   16652 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:01:27.603129   16652 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:01:27.603151   16652 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.163 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-866342 NodeName:addons-866342 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:01:27.603323   16652 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-866342"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 19:01:27.603416   16652 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-866342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-866342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:01:27.603497   16652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:01:27.612394   16652 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:01:27.612459   16652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:01:27.620511   16652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1024 19:01:27.637001   16652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:01:27.654181   16652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1024 19:01:27.671009   16652 ssh_runner.go:195] Run: grep 192.168.39.163	control-plane.minikube.internal$ /etc/hosts
	I1024 19:01:27.674681   16652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:01:27.685472   16652 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342 for IP: 192.168.39.163
	I1024 19:01:27.685511   16652 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:27.685629   16652 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 19:01:27.781869   16652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt ...
	I1024 19:01:27.781899   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt: {Name:mk5986d412e7800237b3efcd0cbb9849437180c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:27.782051   16652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key ...
	I1024 19:01:27.782061   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key: {Name:mkfff13cbfa1679f2c22954f13a806f8b04b8c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:27.782129   16652 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 19:01:27.895659   16652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt ...
	I1024 19:01:27.895684   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt: {Name:mkfa7ee4955395e6d99ed1452389a5750c3b1665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:27.895812   16652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key ...
	I1024 19:01:27.895822   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key: {Name:mk489124e20b3e297af3411bd0d812f2e771776f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:27.895924   16652 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.key
	I1024 19:01:27.895938   16652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt with IP's: []
	I1024 19:01:28.035476   16652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt ...
	I1024 19:01:28.035505   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: {Name:mk1a541f9512dfeb8d36c62970e267637fe02fa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:28.035642   16652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.key ...
	I1024 19:01:28.035653   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.key: {Name:mk302e4370628b1ce6f2b5b21c790bd66ebab1d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:28.035716   16652 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.key.a64e5ae8
	I1024 19:01:28.035734   16652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.crt.a64e5ae8 with IP's: [192.168.39.163 10.96.0.1 127.0.0.1 10.0.0.1]
	I1024 19:01:28.181446   16652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.crt.a64e5ae8 ...
	I1024 19:01:28.181471   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.crt.a64e5ae8: {Name:mk69844b9e5c4ab3149565250143ae625374bad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:28.181609   16652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.key.a64e5ae8 ...
	I1024 19:01:28.181619   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.key.a64e5ae8: {Name:mk8ecef18ebccb05b2d420f450e8ebd230667ad3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:28.181682   16652 certs.go:337] copying /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.crt.a64e5ae8 -> /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.crt
	I1024 19:01:28.181762   16652 certs.go:341] copying /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.key.a64e5ae8 -> /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.key
	I1024 19:01:28.181809   16652 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.key
	I1024 19:01:28.181824   16652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.crt with IP's: []
	I1024 19:01:28.304931   16652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.crt ...
	I1024 19:01:28.304955   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.crt: {Name:mkb4d56decaceefc744af9b1328b7073d4ce7707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:28.305088   16652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.key ...
	I1024 19:01:28.305100   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.key: {Name:mk2931679b2b1bb2ba63ae9a95bb1a04e4212768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:28.305261   16652 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 19:01:28.305315   16652 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 19:01:28.305350   16652 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:01:28.305383   16652 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 19:01:28.305915   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:01:28.332178   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:01:28.354638   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:01:28.376739   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 19:01:28.399183   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:01:28.421963   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:01:28.444449   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:01:28.469716   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 19:01:28.492307   16652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:01:28.516336   16652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:01:28.536768   16652 ssh_runner.go:195] Run: openssl version
	I1024 19:01:28.542636   16652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:01:28.551917   16652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:01:28.556610   16652 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:01:28.556763   16652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:01:28.562604   16652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:01:28.572374   16652 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:01:28.576716   16652 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:01:28.576756   16652 kubeadm.go:404] StartCluster: {Name:addons-866342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-866342 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:01:28.576824   16652 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:01:28.576867   16652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:01:28.621983   16652 cri.go:89] found id: ""
	I1024 19:01:28.749064   16652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:01:28.758460   16652 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:01:28.766781   16652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:01:28.774943   16652 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:01:28.774985   16652 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1024 19:01:28.942631   16652 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:01:41.071771   16652 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1024 19:01:41.071846   16652 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:01:41.071931   16652 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:01:41.072041   16652 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:01:41.072220   16652 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 19:01:41.072303   16652 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:01:41.074115   16652 out.go:204]   - Generating certificates and keys ...
	I1024 19:01:41.074209   16652 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:01:41.074268   16652 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:01:41.074324   16652 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:01:41.074402   16652 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:01:41.074454   16652 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1024 19:01:41.074501   16652 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1024 19:01:41.074574   16652 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1024 19:01:41.074734   16652 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-866342 localhost] and IPs [192.168.39.163 127.0.0.1 ::1]
	I1024 19:01:41.074800   16652 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1024 19:01:41.074943   16652 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-866342 localhost] and IPs [192.168.39.163 127.0.0.1 ::1]
	I1024 19:01:41.075013   16652 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:01:41.075110   16652 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:01:41.075186   16652 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1024 19:01:41.075253   16652 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:01:41.075332   16652 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:01:41.075410   16652 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:01:41.075520   16652 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:01:41.075574   16652 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:01:41.075640   16652 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:01:41.075712   16652 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:01:41.078275   16652 out.go:204]   - Booting up control plane ...
	I1024 19:01:41.078378   16652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:01:41.078483   16652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:01:41.078552   16652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:01:41.078656   16652 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:01:41.078758   16652 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:01:41.078799   16652 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:01:41.078962   16652 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:01:41.079065   16652 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002101 seconds
	I1024 19:01:41.079212   16652 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:01:41.079368   16652 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:01:41.079418   16652 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:01:41.079631   16652 kubeadm.go:322] [mark-control-plane] Marking the node addons-866342 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 19:01:41.079688   16652 kubeadm.go:322] [bootstrap-token] Using token: a0j6ox.ibf86dwwapxuzwwq
	I1024 19:01:41.081277   16652 out.go:204]   - Configuring RBAC rules ...
	I1024 19:01:41.081410   16652 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:01:41.081507   16652 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:01:41.081660   16652 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:01:41.081776   16652 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:01:41.081866   16652 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:01:41.081931   16652 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:01:41.082029   16652 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:01:41.082089   16652 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:01:41.082158   16652 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:01:41.082170   16652 kubeadm.go:322] 
	I1024 19:01:41.082284   16652 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:01:41.082310   16652 kubeadm.go:322] 
	I1024 19:01:41.082408   16652 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:01:41.082415   16652 kubeadm.go:322] 
	I1024 19:01:41.082434   16652 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:01:41.082483   16652 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:01:41.082528   16652 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:01:41.082537   16652 kubeadm.go:322] 
	I1024 19:01:41.082598   16652 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1024 19:01:41.082615   16652 kubeadm.go:322] 
	I1024 19:01:41.082695   16652 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 19:01:41.082705   16652 kubeadm.go:322] 
	I1024 19:01:41.082775   16652 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:01:41.082838   16652 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:01:41.082899   16652 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:01:41.082905   16652 kubeadm.go:322] 
	I1024 19:01:41.082969   16652 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:01:41.083061   16652 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:01:41.083076   16652 kubeadm.go:322] 
	I1024 19:01:41.083148   16652 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token a0j6ox.ibf86dwwapxuzwwq \
	I1024 19:01:41.083230   16652 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f \
	I1024 19:01:41.083248   16652 kubeadm.go:322] 	--control-plane 
	I1024 19:01:41.083251   16652 kubeadm.go:322] 
	I1024 19:01:41.083316   16652 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:01:41.083323   16652 kubeadm.go:322] 
	I1024 19:01:41.083384   16652 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a0j6ox.ibf86dwwapxuzwwq \
	I1024 19:01:41.083547   16652 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
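	(Editor's note: the kubeadm output above prints the kubectl-based setup steps for the freshly initialized control plane. As a hedged illustration only, and not part of the test harness, the same "start using your cluster" step can be done programmatically with client-go against the admin.conf path shown in the log; the snippet below is a minimal Go sketch under that assumption.)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// Hedged sketch: load the admin.conf referenced by kubeadm above and list
	// nodes, the client-go equivalent of the kubectl setup steps it prints.
	// Only the kubeconfig path comes from the log; the rest is illustrative.
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Println(n.Name) // expect the single control-plane node, e.g. addons-866342
		}
	}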
	I1024 19:01:41.083562   16652 cni.go:84] Creating CNI manager for ""
	I1024 19:01:41.083568   16652 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:01:41.086053   16652 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 19:01:41.087442   16652 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 19:01:41.112058   16652 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
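	(Editor's note: the two lines above show minikube configuring the bridge CNI by writing a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist; the file contents are not captured in this log. Purely as a hedged sketch, with field values that are assumptions rather than the bytes minikube actually wrote, a minimal bridge conflist could be written like this in Go.)

	package main

	import "os"

	// Hypothetical bridge CNI conflist; subnet, plugin set, and versions are
	// assumptions. The real 1-k8s.conflist written above is not shown in this log.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		// Create the CNI config directory and write the conflist, mirroring the
		// "mkdir -p /etc/cni/net.d" and scp steps recorded in the log above.
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}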
	I1024 19:01:41.160187   16652 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:01:41.160266   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:41.160293   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=addons-866342 minikube.k8s.io/updated_at=2023_10_24T19_01_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:41.214190   16652 ops.go:34] apiserver oom_adj: -16
	I1024 19:01:41.323047   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:41.433148   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:42.037613   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:42.537263   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:43.037986   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:43.537170   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:44.037466   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:44.537338   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:45.037949   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:45.537159   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:46.037753   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:46.537147   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:47.037719   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:47.537042   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:48.037582   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:48.537805   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:49.037016   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:49.537656   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:50.037866   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:50.537863   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:51.037558   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:51.537874   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:52.037253   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:52.537520   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:53.037052   16652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:53.156643   16652 kubeadm.go:1081] duration metric: took 11.996430206s to wait for elevateKubeSystemPrivileges.
	I1024 19:01:53.156671   16652 kubeadm.go:406] StartCluster complete in 24.579917364s
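	(Editor's note: the repeated "kubectl get sa default" runs above are minikube polling roughly every 500ms until the default ServiceAccount exists, which is why elevateKubeSystemPrivileges reports ~12s. The Go sketch below reproduces that polling pattern under stated assumptions; the binary path and kubeconfig flag mirror the log, while the interval and timeout are assumptions, not minikube's actual values.)

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// Hedged sketch of the retry loop visible above: rerun "kubectl get sa default"
	// until it succeeds or the context expires.
	func waitForDefaultServiceAccount(ctx context.Context) error {
		kubectl := "/var/lib/minikube/binaries/v1.28.3/kubectl"
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			cmd := exec.CommandContext(ctx, "sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				return nil // default ServiceAccount exists; RBAC bootstrap can continue
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for default service account: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForDefaultServiceAccount(ctx); err != nil {
			fmt.Println(err)
		}
	}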
	I1024 19:01:53.156687   16652 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:53.156803   16652 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:01:53.157191   16652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:53.157402   16652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:01:53.157493   16652 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
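	(Editor's note: the toEnable map logged above drives the concurrent "Setting addon ... in addons-866342" lines that follow. As a hedged sketch only, with names and structure that are assumptions rather than minikube's internal types, reducing such a map to the list of addons to start looks like this in Go.)

	package main

	import (
		"fmt"
		"sort"
	)

	// Hypothetical helper: collect the addon names whose value is true in a
	// toEnable map shaped like the log line above.
	func enabledAddons(toEnable map[string]bool) []string {
		var names []string
		for name, enabled := range toEnable {
			if enabled {
				names = append(names, name)
			}
		}
		sort.Strings(names)
		return names
	}

	func main() {
		toEnable := map[string]bool{
			"ingress":             true,
			"ingress-dns":         true,
			"metrics-server":      true,
			"dashboard":           false,
			"storage-provisioner": true,
		}
		fmt.Println(enabledAddons(toEnable)) // [ingress ingress-dns metrics-server storage-provisioner]
	}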
	I1024 19:01:53.157582   16652 addons.go:69] Setting volumesnapshots=true in profile "addons-866342"
	I1024 19:01:53.157599   16652 addons.go:69] Setting default-storageclass=true in profile "addons-866342"
	I1024 19:01:53.157601   16652 addons.go:69] Setting ingress-dns=true in profile "addons-866342"
	I1024 19:01:53.157617   16652 addons.go:69] Setting registry=true in profile "addons-866342"
	I1024 19:01:53.157619   16652 addons.go:231] Setting addon ingress-dns=true in "addons-866342"
	I1024 19:01:53.157625   16652 config.go:182] Loaded profile config "addons-866342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:01:53.157638   16652 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-866342"
	I1024 19:01:53.157642   16652 addons.go:69] Setting storage-provisioner=true in profile "addons-866342"
	I1024 19:01:53.157653   16652 addons.go:231] Setting addon storage-provisioner=true in "addons-866342"
	I1024 19:01:53.157643   16652 addons.go:69] Setting inspektor-gadget=true in profile "addons-866342"
	I1024 19:01:53.157628   16652 addons.go:231] Setting addon registry=true in "addons-866342"
	I1024 19:01:53.157665   16652 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-866342"
	I1024 19:01:53.157667   16652 addons.go:231] Setting addon inspektor-gadget=true in "addons-866342"
	I1024 19:01:53.157674   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.157696   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.157697   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.157707   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.157709   16652 addons.go:69] Setting metrics-server=true in profile "addons-866342"
	I1024 19:01:53.157719   16652 addons.go:231] Setting addon metrics-server=true in "addons-866342"
	I1024 19:01:53.157747   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.158095   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158096   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158103   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158095   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158111   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158126   16652 addons.go:69] Setting helm-tiller=true in profile "addons-866342"
	I1024 19:01:53.158141   16652 addons.go:231] Setting addon helm-tiller=true in "addons-866342"
	I1024 19:01:53.158143   16652 addons.go:69] Setting gcp-auth=true in profile "addons-866342"
	I1024 19:01:53.158149   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158162   16652 mustload.go:65] Loading cluster: addons-866342
	I1024 19:01:53.158174   16652 addons.go:69] Setting ingress=true in profile "addons-866342"
	I1024 19:01:53.157629   16652 addons.go:69] Setting cloud-spanner=true in profile "addons-866342"
	I1024 19:01:53.158181   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158186   16652 addons.go:231] Setting addon ingress=true in "addons-866342"
	I1024 19:01:53.158189   16652 addons.go:231] Setting addon cloud-spanner=true in "addons-866342"
	I1024 19:01:53.158175   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.158223   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.158131   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.157608   16652 addons.go:231] Setting addon volumesnapshots=true in "addons-866342"
	I1024 19:01:53.158326   16652 config.go:182] Loaded profile config "addons-866342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:01:53.158525   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158535   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.157619   16652 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-866342"
	I1024 19:01:53.158552   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158600   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.157697   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.158641   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158669   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158163   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158753   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.158802   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158877   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158899   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.158926   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.158128   16652 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-866342"
	I1024 19:01:53.158941   16652 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-866342"
	I1024 19:01:53.158944   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.157623   16652 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-866342"
	I1024 19:01:53.158991   16652 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-866342"
	I1024 19:01:53.159179   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.159251   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.159268   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.159292   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.159542   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.159574   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.178136   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43509
	I1024 19:01:53.178998   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.179518   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.179538   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.179898   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.180113   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.180820   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I1024 19:01:53.180979   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I1024 19:01:53.181392   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.181949   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.181966   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.182364   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.182948   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.182984   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.183423   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.183792   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.183820   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.184419   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.184964   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.184994   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.185339   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.185520   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.187919   16652 addons.go:231] Setting addon default-storageclass=true in "addons-866342"
	I1024 19:01:53.187954   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.188313   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.188343   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.188521   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46601
	I1024 19:01:53.196209   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39725
	I1024 19:01:53.196352   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I1024 19:01:53.196441   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I1024 19:01:53.196779   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.197483   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.197737   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.197782   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.197789   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.197854   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.197887   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.197967   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.197981   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.198222   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.198389   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.198414   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.198656   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.198672   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.199120   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.199194   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.199212   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.199271   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.199311   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.199742   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.199769   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.199897   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.199930   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.199995   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34819
	I1024 19:01:53.200640   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.200673   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.200880   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.201396   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.201430   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.201623   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.202059   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.202075   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.202367   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.202818   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.202845   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.213887   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I1024 19:01:53.213962   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42545
	I1024 19:01:53.214014   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I1024 19:01:53.214374   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.214468   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.214780   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.215238   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.215254   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.215364   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.215376   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.215485   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.215496   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.215838   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.216364   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.216408   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.216842   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.217355   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.217387   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.217828   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.217990   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.231014   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45837
	I1024 19:01:53.231492   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.231588   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41985
	I1024 19:01:53.232072   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.232511   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.232530   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.232651   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.232662   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.232979   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.233267   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.233857   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.233895   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.234040   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.234094   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.234336   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33725
	I1024 19:01:53.235114   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.235182   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40529
	I1024 19:01:53.235586   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.236033   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.236052   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.236402   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.236536   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.236819   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.236836   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.236897   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I1024 19:01:53.237204   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.237284   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.237518   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.237699   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.237715   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.237772   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33545
	I1024 19:01:53.238142   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.238195   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.238402   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.238563   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.238582   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.238597   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.239132   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.240655   16652 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1024 19:01:53.239431   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.239554   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.240983   16652 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-866342"
	I1024 19:01:53.244713   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I1024 19:01:53.244729   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I1024 19:01:53.245145   16652 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1024 19:01:53.245157   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1024 19:01:53.245175   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.245261   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:01:53.245677   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.245707   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.246874   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
	I1024 19:01:53.248548   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1024 19:01:53.247504   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.247505   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.247809   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.249037   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I1024 19:01:53.251664   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1024 19:01:53.250443   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.250476   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.250512   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.250590   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.250638   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.250777   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.251274   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.253153   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1024 19:01:53.251742   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.251755   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.251779   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.251796   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.252277   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.252294   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.252756   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I1024 19:01:53.254519   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.254587   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.254843   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.254930   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.254958   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.255090   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.255677   16652 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1024 19:01:53.256030   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.256887   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1024 19:01:53.257414   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.257493   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.257510   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.257956   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.258769   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.258794   16652 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1024 19:01:53.258807   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1024 19:01:53.258826   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.258861   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.258889   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.257981   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.259064   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.258242   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.258468   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I1024 19:01:53.258724   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1024 19:01:53.259468   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.259660   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.259702   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.260528   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.260909   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45001
	I1024 19:01:53.262557   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1024 19:01:53.263218   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.263248   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38731
	I1024 19:01:53.264174   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.264456   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1024 19:01:53.264467   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.265556   16652 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.3
	I1024 19:01:53.266759   16652 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:01:53.265579   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.265235   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.265273   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.265459   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.264717   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.264909   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.266144   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.268012   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.268356   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.268428   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I1024 19:01:53.269587   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.269969   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.270831   16652 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:01:53.271029   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.271197   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.271924   16652 out.go:177]   - Using image docker.io/registry:2.8.3
	I1024 19:01:53.272291   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.272925   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
	I1024 19:01:53.273019   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1024 19:01:53.273396   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.274203   16652 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1024 19:01:53.275576   16652 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1024 19:01:53.275593   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1024 19:01:53.274194   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.275609   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.276972   16652 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1024 19:01:53.276985   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1024 19:01:53.276998   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.274490   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.274566   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.274677   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.274698   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.274726   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.276112   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.278490   16652 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1024 19:01:53.278574   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.278620   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1024 19:01:53.281460   16652 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1024 19:01:53.281470   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1024 19:01:53.281492   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.281460   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1024 19:01:53.281526   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.279521   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:01:53.281589   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:01:53.279190   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.282181   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.282276   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.282463   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.282518   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.283191   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.283795   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33137
	I1024 19:01:53.283966   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.284031   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.284334   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44615
	I1024 19:01:53.286221   16652 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:01:53.284357   16652 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:01:53.284390   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.284677   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.285213   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.285235   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.286043   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.286148   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.286474   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.286617   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.287639   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:01:53.287654   16652 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:01:53.287668   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:01:53.287684   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.287657   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.287730   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.287741   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.287749   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.287763   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.287783   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.287803   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.288336   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.288365   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.288420   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.288431   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.288449   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.288458   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.288486   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.288509   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.289859   16652 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1024 19:01:53.291015   16652 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 19:01:53.291028   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 19:01:53.291044   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.288972   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.288996   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.292340   16652 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1024 19:01:53.289059   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.289062   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.289073   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.289081   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.289212   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.291312   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.291335   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.291359   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.292103   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.292132   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.292403   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.293585   16652 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1024 19:01:53.293595   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1024 19:01:53.293611   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.293644   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.293682   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.293699   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.293724   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.293744   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.294392   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.294409   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.294478   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.294515   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.294535   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.294573   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.294612   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.294949   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.294970   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.295019   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.295030   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.295063   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.295078   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.295091   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.295187   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.295235   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.295675   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.295920   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.295937   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.295918   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.296361   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.296417   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.296500   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.298035   16652 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1024 19:01:53.297489   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.297923   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.300495   16652 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1024 19:01:53.299309   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.300513   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1024 19:01:53.300527   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.300534   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.299321   16652 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1024 19:01:53.299500   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.301953   16652 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1024 19:01:53.301966   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1024 19:01:53.301986   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.301995   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.302098   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:01:53.303048   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.303357   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.303375   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.303514   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.303674   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.303785   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.303916   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	W1024 19:01:53.304750   16652 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43222->192.168.39.163:22: read: connection reset by peer
	I1024 19:01:53.304778   16652 retry.go:31] will retry after 200.567523ms: ssh: handshake failed: read tcp 192.168.39.1:43222->192.168.39.163:22: read: connection reset by peer
	I1024 19:01:53.304849   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.305250   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.305268   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.305469   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.305626   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.305773   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.305895   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	W1024 19:01:53.306797   16652 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43236->192.168.39.163:22: read: connection reset by peer
	I1024 19:01:53.306818   16652 retry.go:31] will retry after 257.504283ms: ssh: handshake failed: read tcp 192.168.39.1:43236->192.168.39.163:22: read: connection reset by peer
	I1024 19:01:53.308758   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I1024 19:01:53.309057   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:01:53.309491   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:01:53.309509   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:01:53.309831   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:01:53.310008   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:01:53.311355   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:01:53.312921   16652 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1024 19:01:53.314352   16652 out.go:177]   - Using image docker.io/busybox:stable
	I1024 19:01:53.315604   16652 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1024 19:01:53.315614   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1024 19:01:53.315626   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:01:53.317998   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.318308   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:01:53.318321   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:01:53.318448   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:01:53.318594   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:01:53.318720   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:01:53.318855   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
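
The block above is minikube's sshutil layer opening several parallel SSH connections to the guest (192.168.39.163:22, user docker, the generated id_rsa key) so that each addon manifest can be copied and applied concurrently; the two "handshake failed ... connection reset by peer" warnings are retried after a short backoff rather than treated as fatal. A minimal sketch of the same idea (key-based SSH dial with retry) using golang.org/x/crypto/ssh — this is illustrative only, not minikube's actual sshutil/retry code, and the addresses and key path are taken from the log purely as examples:

    // Sketch only: key-based SSH dial with a simple retry/backoff, loosely
    // mirroring the handshake retries logged above. Not minikube's implementation.
    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local VM only
            Timeout:         10 * time.Second,
        }
        var lastErr error
        for i := 0; i < attempts; i++ {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            lastErr = err // e.g. "read: connection reset by peer" while sshd is still coming up
            time.Sleep(time.Duration(i+1) * 200 * time.Millisecond)
        }
        return nil, fmt.Errorf("ssh dial %s failed after %d attempts: %w", addr, attempts, lastErr)
    }

    func main() {
        client, err := dialWithRetry("192.168.39.163:22", "docker",
            "/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa", 5)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer client.Close()
        fmt.Println("connected")
    }
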
	I1024 19:01:53.444095   16652 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-866342" context rescaled to 1 replicas
	I1024 19:01:53.444138   16652 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:01:53.445780   16652 out.go:177] * Verifying Kubernetes components...
	I1024 19:01:53.447875   16652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:01:53.461861   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1024 19:01:53.473443   16652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
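
The bash pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts plugin block (resolving host.minikube.internal to the host-side bridge IP 192.168.39.1) in front of the forward directive and a log directive in front of errors, then replaces the ConfigMap. Reconstructed from those sed expressions, the relevant part of the resulting Corefile looks roughly like this:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The "host record injected into CoreDNS's ConfigMap" line further down (19:01:59) confirms this edit completed.
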
	I1024 19:01:53.485254   16652 node_ready.go:35] waiting up to 6m0s for node "addons-866342" to be "Ready" ...
	I1024 19:01:53.527192   16652 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1024 19:01:53.527212   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1024 19:01:53.534833   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1024 19:01:53.604120   16652 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1024 19:01:53.604143   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1024 19:01:53.610586   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:01:53.616641   16652 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1024 19:01:53.616663   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1024 19:01:53.623559   16652 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 19:01:53.623576   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1024 19:01:53.623720   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1024 19:01:53.631599   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:01:53.646077   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1024 19:01:53.646098   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1024 19:01:53.671240   16652 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1024 19:01:53.671259   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1024 19:01:53.681917   16652 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1024 19:01:53.681941   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1024 19:01:53.715877   16652 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1024 19:01:53.715894   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1024 19:01:53.762595   16652 node_ready.go:49] node "addons-866342" has status "Ready":"True"
	I1024 19:01:53.762615   16652 node_ready.go:38] duration metric: took 277.327308ms waiting for node "addons-866342" to be "Ready" ...
	I1024 19:01:53.762624   16652 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
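
At this point minikube waits (up to 6m0s) for the node object to report the Ready condition — here it took about 277ms — and then extends the wait to the listed system-critical pods. A rough client-go equivalent of that node-readiness poll, as a sketch only (the kubeconfig path and the 2s poll interval are assumptions, and this is not minikube's node_ready.go):

    // Sketch only: poll a node until its Ready condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-866342", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second) // assumed poll interval, not minikube's cadence
        }
        fmt.Println("timed out waiting for node Ready")
    }
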
	I1024 19:01:53.803167   16652 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 19:01:53.803185   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 19:01:53.819970   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1024 19:01:53.819996   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1024 19:01:53.841504   16652 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1024 19:01:53.841532   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1024 19:01:53.893178   16652 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1024 19:01:53.893202   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1024 19:01:53.905813   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1024 19:01:53.912073   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1024 19:01:53.917237   16652 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1024 19:01:53.917254   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1024 19:01:53.933071   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1024 19:01:53.946730   16652 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:01:53.946750   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 19:01:53.974195   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1024 19:01:53.974215   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1024 19:01:54.008333   16652 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1024 19:01:54.008353   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1024 19:01:54.094615   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:01:54.152965   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1024 19:01:54.158271   16652 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1024 19:01:54.158287   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1024 19:01:54.160847   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1024 19:01:54.160867   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1024 19:01:54.189404   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1024 19:01:54.189421   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1024 19:01:54.277353   16652 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1024 19:01:54.277376   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1024 19:01:54.283319   16652 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:01:54.283342   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1024 19:01:54.294904   16652 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1024 19:01:54.294927   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1024 19:01:54.359955   16652 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1024 19:01:54.359980   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1024 19:01:54.378805   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:01:54.397269   16652 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1024 19:01:54.397291   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1024 19:01:54.467878   16652 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1024 19:01:54.467901   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1024 19:01:54.513926   16652 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1024 19:01:54.513946   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1024 19:01:54.555510   16652 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace to be "Ready" ...
	I1024 19:01:54.562615   16652 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1024 19:01:54.562631   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1024 19:01:54.581769   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1024 19:01:54.631207   16652 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1024 19:01:54.631226   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1024 19:01:54.703707   16652 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1024 19:01:54.703733   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1024 19:01:54.764501   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1024 19:01:56.626742   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:01:58.708673   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:01:59.179931   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.71803685s)
	I1024 19:01:59.179998   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:01:59.179999   16652 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.706524506s)
	I1024 19:01:59.180015   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:01:59.180022   16652 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1024 19:01:59.180289   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:01:59.180303   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:01:59.180306   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:01:59.180321   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:01:59.180332   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:01:59.180692   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:01:59.180708   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:01:59.180722   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:00.233180   16652 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1024 19:02:00.233220   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:02:00.236342   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:02:00.236796   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:02:00.236828   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:02:00.237041   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:02:00.237265   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:02:00.237451   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:02:00.237591   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:02:00.610490   16652 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1024 19:02:00.862168   16652 addons.go:231] Setting addon gcp-auth=true in "addons-866342"
	I1024 19:02:00.862234   16652 host.go:66] Checking if "addons-866342" exists ...
	I1024 19:02:00.862665   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:02:00.862710   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:02:00.878295   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39189
	I1024 19:02:00.878699   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:02:00.879272   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:02:00.879288   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:02:00.879574   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:02:00.880186   16652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:02:00.880234   16652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:02:00.920393   16652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33887
	I1024 19:02:00.920944   16652 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:02:00.921492   16652 main.go:141] libmachine: Using API Version  1
	I1024 19:02:00.921518   16652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:02:00.921804   16652 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:02:00.922017   16652 main.go:141] libmachine: (addons-866342) Calling .GetState
	I1024 19:02:00.923718   16652 main.go:141] libmachine: (addons-866342) Calling .DriverName
	I1024 19:02:00.923930   16652 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1024 19:02:00.923955   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHHostname
	I1024 19:02:00.927096   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:02:00.927566   16652 main.go:141] libmachine: (addons-866342) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:c1:28", ip: ""} in network mk-addons-866342: {Iface:virbr1 ExpiryTime:2023-10-24 20:01:10 +0000 UTC Type:0 Mac:52:54:00:26:c1:28 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:addons-866342 Clientid:01:52:54:00:26:c1:28}
	I1024 19:02:00.927600   16652 main.go:141] libmachine: (addons-866342) DBG | domain addons-866342 has defined IP address 192.168.39.163 and MAC address 52:54:00:26:c1:28 in network mk-addons-866342
	I1024 19:02:00.927773   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHPort
	I1024 19:02:00.927950   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHKeyPath
	I1024 19:02:00.928125   16652 main.go:141] libmachine: (addons-866342) Calling .GetSSHUsername
	I1024 19:02:00.928313   16652 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/addons-866342/id_rsa Username:docker}
	I1024 19:02:01.061409   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:01.847908   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.313037131s)
	I1024 19:02:01.847960   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.237348403s)
	I1024 19:02:01.847992   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848004   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848014   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.224273755s)
	I1024 19:02:01.848032   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848046   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.847962   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848081   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848115   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.2164986s)
	I1024 19:02:01.848131   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848142   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848230   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.942382392s)
	I1024 19:02:01.848346   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.848285   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.848288   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.848381   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.848389   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.848354   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848436   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848391   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.91529199s)
	I1024 19:02:01.848496   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848499   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.848302   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.848323   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.936227839s)
	I1024 19:02:01.848509   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848520   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848528   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848531   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.753890601s)
	I1024 19:02:01.848255   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.848548   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.848551   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848557   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848561   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848566   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848395   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.848599   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848608   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848607   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.695612886s)
	I1024 19:02:01.848623   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848631   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848759   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.469924232s)
	W1024 19:02:01.848794   16652 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1024 19:02:01.848818   16652 retry.go:31] will retry after 163.134961ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
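
This failure is the usual CRD ordering race: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is applied in the same kubectl batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet registered the new kind, hence "no matches for kind ... ensure CRDs are installed first". minikube simply retries; the later apply --force at 19:02:02 succeeds once the CRDs are established (completed at 19:02:04 below). Outside of minikube, one common way to avoid the race is to apply the CRDs first, wait for them to become Established, and only then apply the custom resources — a sketch using the file names from this log:

    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
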
	I1024 19:02:01.848861   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.848874   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.848884   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848889   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.267091083s)
	I1024 19:02:01.848933   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.848942   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848955   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.848894   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848966   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.848975   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.848986   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848475   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.849016   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.849027   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.849037   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.848956   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.849230   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.848422   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.849284   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.849253   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.849609   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.849619   16652 addons.go:467] Verifying addon metrics-server=true in "addons-866342"
	I1024 19:02:01.849941   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.849971   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.849970   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.849986   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.849990   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.850002   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.850125   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.850146   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.850153   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.850160   16652 addons.go:467] Verifying addon ingress=true in "addons-866342"
	I1024 19:02:01.850174   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.850195   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.850205   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.850213   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.853288   16652 out.go:177] * Verifying ingress addon...
	I1024 19:02:01.850258   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.850278   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.850294   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.850315   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.850641   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.850803   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.851135   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.851276   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.852134   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.852160   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.853351   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.853362   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.855110   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.855143   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.853372   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.853382   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.855173   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.855181   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.855190   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.855192   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.853394   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.853998   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.854033   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.855322   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.855379   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.855403   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.855411   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.855446   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.855452   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.855460   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.855461   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.855487   16652 addons.go:467] Verifying addon registry=true in "addons-866342"
	I1024 19:02:01.857049   16652 out.go:177] * Verifying registry addon...
	I1024 19:02:01.855568   16652 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1024 19:02:01.859162   16652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1024 19:02:01.888788   16652 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1024 19:02:01.888808   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
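
Each of these kapi.go waits lists pods by label selector in a namespace and keeps polling until every match reports the Ready condition; the repeated "current state: Pending" lines that follow are exactly that loop. A minimal client-go sketch of such a check — a hypothetical helper, not minikube's kapi package:

    // Sketch only: report whether every pod matching a label selector in a
    // namespace currently has the Ready condition set to True.
    package kapisketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func allPodsReady(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        if len(pods.Items) == 0 {
            return false, nil // nothing scheduled yet, keep waiting
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if !ready {
                return false, nil // still Pending or not yet Ready, as in the lines below
            }
        }
        return true, nil
    }

A caller would poll this every few seconds for selectors such as app.kubernetes.io/name=ingress-nginx in ingress-nginx, kubernetes.io/minikube-addons=registry in kube-system, or kubernetes.io/minikube-addons=csi-hostpath-driver, which is what the subsequent log lines show.
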
	I1024 19:02:01.897609   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.897626   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.897857   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.897876   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:01.897879   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	W1024 19:02:01.897968   16652 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1024 19:02:01.902519   16652 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1024 19:02:01.902547   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:01.912796   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:01.913870   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:01.913892   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:01.914163   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:01.914182   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:01.918195   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:02.013121   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:02:02.437278   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:02.477489   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:02.682036   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.917481006s)
	I1024 19:02:02.682090   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:02.682104   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:02.682042   16652 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.758089174s)
	I1024 19:02:02.683823   16652 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:02:02.682436   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:02.682507   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:02.685730   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:02.685749   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:02.687493   16652 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1024 19:02:02.685767   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:02.689021   16652 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1024 19:02:02.689037   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1024 19:02:02.689303   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:02.689323   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:02.689324   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:02.689343   16652 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-866342"
	I1024 19:02:02.691046   16652 out.go:177] * Verifying csi-hostpath-driver addon...
	I1024 19:02:02.693601   16652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1024 19:02:02.811003   16652 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1024 19:02:02.811025   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1024 19:02:02.862440   16652 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1024 19:02:02.862460   16652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1024 19:02:02.865285   16652 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1024 19:02:02.865312   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:02.912907   16652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1024 19:02:02.993525   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:03.030268   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:03.076563   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:03.395797   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:03.425517   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:03.431251   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:03.541523   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:03.917762   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:03.950850   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:04.040300   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:04.417362   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:04.439538   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:04.540895   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:04.749452   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.736272852s)
	I1024 19:02:04.749526   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:04.749542   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:04.749947   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:04.749957   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:04.749974   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:04.749992   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:04.750003   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:04.750290   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:04.750302   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:04.750318   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:04.965977   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:04.966313   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:05.104083   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:05.114436   16652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.201485104s)
	I1024 19:02:05.114500   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:05.114513   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:05.114807   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:05.114828   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:05.114830   16652 main.go:141] libmachine: (addons-866342) DBG | Closing plugin on server side
	I1024 19:02:05.114838   16652 main.go:141] libmachine: Making call to close driver server
	I1024 19:02:05.114846   16652 main.go:141] libmachine: (addons-866342) Calling .Close
	I1024 19:02:05.115070   16652 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:02:05.115083   16652 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:02:05.116499   16652 addons.go:467] Verifying addon gcp-auth=true in "addons-866342"
	I1024 19:02:05.118426   16652 out.go:177] * Verifying gcp-auth addon...
	I1024 19:02:05.120910   16652 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1024 19:02:05.154162   16652 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1024 19:02:05.154186   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:05.209236   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:05.417279   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:05.423518   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:05.537033   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:05.715339   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:05.880314   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:05.918162   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:05.922762   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:06.040050   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:06.215201   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:06.429982   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:06.444250   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:06.547959   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:06.713539   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:06.918324   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:06.922532   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:07.037463   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:07.212801   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:07.417560   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:07.423347   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:07.542324   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:07.712785   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:07.886198   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:07.921150   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:07.927858   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:08.036270   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:08.213193   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:08.418307   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:08.422769   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:08.536747   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:08.714031   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:08.917413   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:08.927188   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:09.063958   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:09.212839   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:09.418065   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:09.424123   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:09.539060   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:09.714877   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:09.917512   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:09.923418   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:10.052322   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:10.231780   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:10.380149   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:10.418386   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:10.422142   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:10.540361   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:10.713514   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:10.918376   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:10.926579   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:11.035912   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:11.213484   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:11.417742   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:11.423073   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:11.535361   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:11.715415   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:11.918839   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:11.925895   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:12.040734   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:12.215521   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:12.380314   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:12.417512   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:12.426004   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:12.537031   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:12.713565   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:12.919402   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:12.927491   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:13.037304   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:13.214135   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:13.418770   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:13.424893   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:13.542099   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:13.713443   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:13.921849   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:13.924209   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:14.040463   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:14.213632   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:14.383489   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:14.417814   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:14.424460   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:14.546417   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:14.715809   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:14.934758   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:14.938061   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:15.050659   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:15.213479   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:15.417436   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:15.458857   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:15.541519   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:15.713587   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:15.917868   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:15.927135   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:16.036944   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:16.214260   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:16.398875   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:16.417983   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:16.423399   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:16.538585   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:16.713329   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:16.917241   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:16.923402   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:17.037644   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:17.213378   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:17.417736   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:17.423799   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:17.536137   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:17.826939   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:17.920657   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:17.926501   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:18.041162   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:18.213941   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:18.418251   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:18.436346   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:18.539615   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:18.713194   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:18.893180   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:18.925770   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:18.926834   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:19.038477   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:19.218119   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:19.427276   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:19.437386   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:19.536629   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:19.713629   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:19.917773   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:19.923590   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:20.041243   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:20.213519   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:20.416740   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:20.423326   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:20.537892   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:20.713900   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:20.917772   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:20.923668   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:21.035534   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:21.213255   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:21.380083   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:21.417039   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:21.422884   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:21.542320   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:21.713736   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:21.918027   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:21.922687   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:22.036748   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:22.213795   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:22.417518   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:22.423216   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:22.543439   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:22.713699   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:22.918174   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:22.924831   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:23.052477   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:23.214315   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:23.390454   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:23.417732   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:23.423322   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:23.537346   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:23.716404   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:23.917611   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:23.922819   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:24.036229   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:24.213269   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:24.418736   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:24.425658   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:24.536225   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:24.713357   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:24.918002   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:24.923899   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:25.036029   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:25.213897   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:25.686652   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:25.698919   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:25.699388   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:25.701768   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:25.713207   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:25.917974   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:25.923112   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:26.036735   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:26.213558   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:26.417647   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:26.423088   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:26.537311   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:26.714160   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:26.918171   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:26.922736   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:27.036857   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:27.212573   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:27.417975   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:27.423202   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:27.536432   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:27.713895   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:27.880647   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:27.918500   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:27.927185   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:28.036938   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:28.217420   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:28.418329   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:28.422580   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:28.535922   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:28.712548   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:28.917697   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:28.923954   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:29.038110   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:29.213698   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:29.696670   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:29.696723   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:29.697700   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:29.714229   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:29.917572   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:29.923331   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:30.036975   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:30.213142   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:30.387201   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:30.418850   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:30.427790   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:30.536084   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:30.715112   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:30.917666   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:30.922831   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:31.041103   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:31.213055   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:31.416693   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:31.422979   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:31.536668   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:31.715288   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:32.046130   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:32.046557   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:32.049023   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:32.214019   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:32.416923   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:32.422050   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:32.537816   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:32.713259   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:32.881129   16652 pod_ready.go:102] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:32.920272   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:32.933083   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:33.036300   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:33.212961   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:33.418532   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:33.422756   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:33.535504   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:33.713030   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:33.916843   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:33.923034   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:34.036511   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:34.213662   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:34.418384   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:34.427395   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:34.536934   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:34.713273   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:34.918181   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:34.922593   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:35.035822   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:35.213825   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:35.383938   16652 pod_ready.go:92] pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:35.383966   16652 pod_ready.go:81] duration metric: took 40.828432492s waiting for pod "coredns-5dd5756b68-btn4f" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.383978   16652 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.389530   16652 pod_ready.go:92] pod "etcd-addons-866342" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:35.389555   16652 pod_ready.go:81] duration metric: took 5.568749ms waiting for pod "etcd-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.389566   16652 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.398208   16652 pod_ready.go:92] pod "kube-apiserver-addons-866342" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:35.398229   16652 pod_ready.go:81] duration metric: took 8.655814ms waiting for pod "kube-apiserver-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.398241   16652 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.405370   16652 pod_ready.go:92] pod "kube-controller-manager-addons-866342" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:35.405388   16652 pod_ready.go:81] duration metric: took 7.139653ms waiting for pod "kube-controller-manager-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.405399   16652 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hz7fb" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.413033   16652 pod_ready.go:92] pod "kube-proxy-hz7fb" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:35.413053   16652 pod_ready.go:81] duration metric: took 7.647033ms waiting for pod "kube-proxy-hz7fb" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.413063   16652 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.420604   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:35.423119   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:35.535965   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:35.714347   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:35.777563   16652 pod_ready.go:92] pod "kube-scheduler-addons-866342" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:35.777584   16652 pod_ready.go:81] duration metric: took 364.515224ms waiting for pod "kube-scheduler-addons-866342" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:35.777592   16652 pod_ready.go:38] duration metric: took 42.014959556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
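	(For reference, the per-pod waits logged above amount to polling each pod's Ready condition in the kube-system namespace. A minimal client-go sketch of that check follows; it is illustrative only and is not minikube's pod_ready.go — the kubeconfig path is an assumption, while the namespace and the k8s-app=kube-dns selector come from the log line above.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(p corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: a kubeconfig at the default location points at the cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List pods matching one of the label selectors waited on above.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, isReady(p))
		}
	}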
	I1024 19:02:35.777607   16652 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:02:35.777650   16652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:02:35.795226   16652 api_server.go:72] duration metric: took 42.351056782s to wait for apiserver process to appear ...
	I1024 19:02:35.795248   16652 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:02:35.795268   16652 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I1024 19:02:35.800170   16652 api_server.go:279] https://192.168.39.163:8443/healthz returned 200:
	ok
	I1024 19:02:35.801251   16652 api_server.go:141] control plane version: v1.28.3
	I1024 19:02:35.801269   16652 api_server.go:131] duration metric: took 6.015528ms to wait for apiserver health ...
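	(The healthz wait above is a plain HTTPS GET against the apiserver endpoint shown in the log. A minimal sketch of such a probe follows; the URL is taken from the log line above, and for brevity the sketch skips TLS certificate verification rather than trusting the cluster CA, which a real health check should do.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Apiserver address as logged above.
		url := "https://192.168.39.163:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustrative only: a real probe should load the cluster CA instead.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// Poll until the endpoint reports 200 ("ok" in the body) or we give up.
		for i := 0; i < 10; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
	}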
	I1024 19:02:35.801276   16652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:02:35.920302   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:35.925142   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:35.992362   16652 system_pods.go:59] 18 kube-system pods found
	I1024 19:02:35.992397   16652 system_pods.go:61] "coredns-5dd5756b68-btn4f" [1a65ce1f-1502-4afb-9739-3ff39aa260e7] Running
	I1024 19:02:35.992405   16652 system_pods.go:61] "csi-hostpath-attacher-0" [b79df6c1-4d3c-4ca3-9ad0-d832297c94c9] Running
	I1024 19:02:35.992412   16652 system_pods.go:61] "csi-hostpath-resizer-0" [83c0bd57-8a4c-438a-b200-5b32f8e2c490] Running
	I1024 19:02:35.992423   16652 system_pods.go:61] "csi-hostpathplugin-2x7pp" [413ba041-ddcd-4b11-8908-3fbaaf9f9128] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1024 19:02:35.992434   16652 system_pods.go:61] "etcd-addons-866342" [7b00fcf3-3c2d-4fbf-90d8-f67cc1775321] Running
	I1024 19:02:35.992442   16652 system_pods.go:61] "kube-apiserver-addons-866342" [74168ee9-8de6-40b7-b5f6-f5df5a682a6f] Running
	I1024 19:02:35.992451   16652 system_pods.go:61] "kube-controller-manager-addons-866342" [43cfb66d-8302-46f0-9dcc-4f33a6f205ce] Running
	I1024 19:02:35.992461   16652 system_pods.go:61] "kube-ingress-dns-minikube" [5d55372e-c8e4-4e55-b251-9dad4fad9890] Running
	I1024 19:02:35.992467   16652 system_pods.go:61] "kube-proxy-hz7fb" [cd6d9bae-e261-4141-9430-b0bfaf748547] Running
	I1024 19:02:35.992474   16652 system_pods.go:61] "kube-scheduler-addons-866342" [84855ad7-d7ae-469a-b5cc-d6bff4f4d483] Running
	I1024 19:02:35.992493   16652 system_pods.go:61] "metrics-server-7c66d45ddc-r2sdc" [216942df-99c1-4c92-b8bd-f0594dbb6894] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:02:35.992505   16652 system_pods.go:61] "nvidia-device-plugin-daemonset-kcrfw" [56d67427-465c-406a-a425-3ded489815e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1024 19:02:35.992520   16652 system_pods.go:61] "registry-9fjkv" [16c9f9e1-0151-4045-bb71-6e31267e58df] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1024 19:02:35.992530   16652 system_pods.go:61] "registry-proxy-8jqwg" [bd54e9d3-a6ec-43ec-910e-38ddb0de2574] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1024 19:02:35.992541   16652 system_pods.go:61] "snapshot-controller-58dbcc7b99-5hc9g" [68ab6123-ccb9-4af7-aa9d-dc523a62522a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1024 19:02:35.992554   16652 system_pods.go:61] "snapshot-controller-58dbcc7b99-gdslt" [4ba3a215-6f34-45d8-90ab-e2823003d8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1024 19:02:35.992565   16652 system_pods.go:61] "storage-provisioner" [e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25] Running
	I1024 19:02:35.992577   16652 system_pods.go:61] "tiller-deploy-7b677967b9-mzrhm" [3653bdf1-8b0f-4839-abe0-48a7faadeb74] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1024 19:02:35.992589   16652 system_pods.go:74] duration metric: took 191.306726ms to wait for pod list to return data ...
	I1024 19:02:35.992601   16652 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:02:36.036363   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:36.178035   16652 default_sa.go:45] found service account: "default"
	I1024 19:02:36.178063   16652 default_sa.go:55] duration metric: took 185.451836ms for default service account to be created ...
	I1024 19:02:36.178074   16652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:02:36.214051   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:36.386279   16652 system_pods.go:86] 18 kube-system pods found
	I1024 19:02:36.386303   16652 system_pods.go:89] "coredns-5dd5756b68-btn4f" [1a65ce1f-1502-4afb-9739-3ff39aa260e7] Running
	I1024 19:02:36.386311   16652 system_pods.go:89] "csi-hostpath-attacher-0" [b79df6c1-4d3c-4ca3-9ad0-d832297c94c9] Running
	I1024 19:02:36.386319   16652 system_pods.go:89] "csi-hostpath-resizer-0" [83c0bd57-8a4c-438a-b200-5b32f8e2c490] Running
	I1024 19:02:36.386330   16652 system_pods.go:89] "csi-hostpathplugin-2x7pp" [413ba041-ddcd-4b11-8908-3fbaaf9f9128] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1024 19:02:36.386339   16652 system_pods.go:89] "etcd-addons-866342" [7b00fcf3-3c2d-4fbf-90d8-f67cc1775321] Running
	I1024 19:02:36.386346   16652 system_pods.go:89] "kube-apiserver-addons-866342" [74168ee9-8de6-40b7-b5f6-f5df5a682a6f] Running
	I1024 19:02:36.386354   16652 system_pods.go:89] "kube-controller-manager-addons-866342" [43cfb66d-8302-46f0-9dcc-4f33a6f205ce] Running
	I1024 19:02:36.386365   16652 system_pods.go:89] "kube-ingress-dns-minikube" [5d55372e-c8e4-4e55-b251-9dad4fad9890] Running
	I1024 19:02:36.386380   16652 system_pods.go:89] "kube-proxy-hz7fb" [cd6d9bae-e261-4141-9430-b0bfaf748547] Running
	I1024 19:02:36.386385   16652 system_pods.go:89] "kube-scheduler-addons-866342" [84855ad7-d7ae-469a-b5cc-d6bff4f4d483] Running
	I1024 19:02:36.386391   16652 system_pods.go:89] "metrics-server-7c66d45ddc-r2sdc" [216942df-99c1-4c92-b8bd-f0594dbb6894] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:02:36.386401   16652 system_pods.go:89] "nvidia-device-plugin-daemonset-kcrfw" [56d67427-465c-406a-a425-3ded489815e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1024 19:02:36.386410   16652 system_pods.go:89] "registry-9fjkv" [16c9f9e1-0151-4045-bb71-6e31267e58df] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1024 19:02:36.386423   16652 system_pods.go:89] "registry-proxy-8jqwg" [bd54e9d3-a6ec-43ec-910e-38ddb0de2574] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1024 19:02:36.386430   16652 system_pods.go:89] "snapshot-controller-58dbcc7b99-5hc9g" [68ab6123-ccb9-4af7-aa9d-dc523a62522a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1024 19:02:36.386436   16652 system_pods.go:89] "snapshot-controller-58dbcc7b99-gdslt" [4ba3a215-6f34-45d8-90ab-e2823003d8ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1024 19:02:36.386443   16652 system_pods.go:89] "storage-provisioner" [e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25] Running
	I1024 19:02:36.386449   16652 system_pods.go:89] "tiller-deploy-7b677967b9-mzrhm" [3653bdf1-8b0f-4839-abe0-48a7faadeb74] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1024 19:02:36.386457   16652 system_pods.go:126] duration metric: took 208.378217ms to wait for k8s-apps to be running ...
	I1024 19:02:36.386467   16652 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:02:36.386518   16652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:02:36.418215   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:36.421928   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:36.423932   16652 system_svc.go:56] duration metric: took 37.457738ms WaitForService to wait for kubelet.
	I1024 19:02:36.423955   16652 kubeadm.go:581] duration metric: took 42.979791904s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:02:36.423976   16652 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:02:36.536293   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:36.577172   16652 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:02:36.577210   16652 node_conditions.go:123] node cpu capacity is 2
	I1024 19:02:36.577227   16652 node_conditions.go:105] duration metric: took 153.243697ms to run NodePressure ...
	I1024 19:02:36.577240   16652 start.go:228] waiting for startup goroutines ...
	I1024 19:02:36.714402   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:36.917780   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:36.926665   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:37.036889   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:37.212776   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:37.418667   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:37.422752   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:37.537868   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:37.713726   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:37.918271   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:37.924942   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:38.051853   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:38.214497   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:38.417813   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:38.427548   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:38.538166   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:38.713592   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:38.918374   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:38.922554   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:39.037200   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:39.213712   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:39.675961   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:39.688465   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:39.695079   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:39.721448   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:39.918090   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:39.923736   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:40.036092   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:40.213916   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:40.418743   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:40.423471   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:40.546164   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:40.713035   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:40.917987   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:40.923506   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:41.037053   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:41.212886   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:41.418188   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:41.423095   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:41.536576   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:41.713868   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:41.966197   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:41.970098   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:42.037368   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:42.216305   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:42.427089   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:42.434357   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:42.539816   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:42.712698   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:42.921310   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:42.930358   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:43.041686   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:43.217962   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:43.421238   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:43.437048   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:43.540548   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:43.748146   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:43.921235   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:43.929090   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:44.036200   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:44.213309   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:44.418391   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:44.422633   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:44.535391   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:44.713732   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:44.918673   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:44.922846   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:45.040012   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:45.213397   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:45.419257   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:45.424171   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:45.538727   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:45.732256   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:45.922084   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:45.931229   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:46.039706   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:46.215376   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:46.420682   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:46.426421   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:46.537729   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:46.713543   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:46.918078   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:46.923807   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:47.035589   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:47.213955   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:47.420698   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:47.424302   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:47.547876   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:47.714275   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:47.917177   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:47.924169   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:48.036452   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:48.213946   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:48.418736   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:48.424966   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:48.535710   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:48.712784   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:49.200432   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:49.234130   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:49.234583   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:49.241466   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:49.417872   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:49.430379   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:49.537582   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:49.713974   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:49.917835   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:49.923848   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:50.035991   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:50.215873   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:50.422346   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:50.425613   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:50.536013   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:50.713117   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:50.918829   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:50.938113   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:51.061372   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:51.213876   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:51.420764   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:51.423973   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:51.538032   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:51.712957   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:51.918693   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:51.922954   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:52.262170   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:52.277774   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:52.420971   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:52.424809   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:52.536075   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:52.715513   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:52.917786   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:52.923215   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:53.037194   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:53.215066   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:53.418941   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:53.423076   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:53.539432   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:53.713611   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:53.917918   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:53.923926   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:54.035626   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:54.213934   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:54.419267   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:54.422105   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:54.536508   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:54.713503   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:54.919281   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:54.924619   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:55.037805   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:55.213794   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:55.421454   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:55.423781   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:55.542018   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:55.714798   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:55.918002   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:55.922425   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:56.053942   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:56.213864   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:56.419032   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:56.422356   16652 kapi.go:107] duration metric: took 54.563193433s to wait for kubernetes.io/minikube-addons=registry ...
	I1024 19:02:56.536473   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:56.713806   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:56.918175   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:57.040396   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:57.213662   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:57.418034   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:57.539708   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:57.714583   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:57.918341   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:58.036848   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:58.229433   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:58.421924   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:58.536871   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:58.714587   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:58.917702   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:59.053041   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:59.213286   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:59.422874   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:59.538558   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:59.714203   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:59.918291   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:00.039776   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:00.218290   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:00.418075   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:00.535802   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:00.715108   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:00.920184   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:01.046891   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:01.213360   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:01.418725   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:01.537912   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:01.713169   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:01.918253   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:02.037494   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:02.213073   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:02.422576   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:02.537430   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:02.712857   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:02.918457   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:03.043847   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:03.213681   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:03.495516   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:03.548869   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:03.713737   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:03.917682   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:04.036631   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:04.215949   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:04.427444   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:04.537751   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:04.713854   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:04.917459   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:05.036863   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:05.214829   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:05.418276   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:05.536067   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:05.714762   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:05.991126   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:06.040568   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:06.213387   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:06.418294   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:06.536385   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:06.714137   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:06.918268   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:07.035869   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:07.214541   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:07.424186   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:07.536378   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:07.714434   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:07.919043   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:08.038159   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:08.215308   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:08.417821   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:08.537153   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:08.712992   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:08.918106   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:09.037073   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:09.220768   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:09.428442   16652 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:09.554486   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:09.717677   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:09.918819   16652 kapi.go:107] duration metric: took 1m8.063251732s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1024 19:03:10.039349   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:10.213065   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:10.539937   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:10.723542   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:11.036629   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:11.215904   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:11.536474   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:11.714782   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:12.036547   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:12.214129   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:12.665187   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:12.733269   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:13.039766   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:13.223008   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:13.538492   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:13.713970   16652 kapi.go:107] duration metric: took 1m8.593057784s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1024 19:03:13.715591   16652 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-866342 cluster.
	I1024 19:03:13.716892   16652 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1024 19:03:13.718207   16652 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1024 19:03:14.036295   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:14.547611   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:15.037623   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:15.537008   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:16.050170   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:16.536127   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:17.037217   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:17.536392   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:18.036196   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:18.537571   16652 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:19.040535   16652 kapi.go:107] duration metric: took 1m16.346930681s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1024 19:03:19.042498   16652 out.go:177] * Enabled addons: ingress-dns, metrics-server, inspektor-gadget, nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1024 19:03:19.043944   16652 addons.go:502] enable addons completed in 1m25.886460189s: enabled=[ingress-dns metrics-server inspektor-gadget nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1024 19:03:19.043978   16652 start.go:233] waiting for cluster config update ...
	I1024 19:03:19.043998   16652 start.go:242] writing updated cluster config ...
	I1024 19:03:19.044225   16652 ssh_runner.go:195] Run: rm -f paused
	I1024 19:03:19.092205   16652 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:03:19.094062   16652 out.go:177] * Done! kubectl is now configured to use "addons-866342" cluster and "default" namespace by default
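	For reference, the `gcp-auth-skip-secret` opt-out mentioned in the gcp-auth messages above is applied as a label in the pod configuration. A minimal sketch follows; the pod name, container, and the label value "true" are illustrative assumptions and are not taken from this run:
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-creds        # hypothetical pod name
	      labels:
	        gcp-auth-skip-secret: "true"    # key referenced by the addon message; value assumed
	    spec:
	      containers:
	      - name: app
	        image: nginx
	
	Pods created with this label are skipped when credentials are mounted; as the log notes, pods that already exist would need to be recreated (or the addon re-enabled with --refresh) for a change to take effect.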
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 19:01:06 UTC, ends at Tue 2023-10-24 19:03:43 UTC. --
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.260843997Z" level=debug msg="Received container exit code: 0, message: " file="oci/runtime_oci.go:617" id=24eef1a5-42e5-4bea-a8f8-b7c3c7ec6fbb name=/runtime.v1.RuntimeService/ExecSync
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.260993601Z" level=debug msg="Response: &ExecSyncResponse{Stdout:[FILTERED],Stderr:[],ExitCode:0,}" file="go-grpc-middleware/chain.go:25" id=24eef1a5-42e5-4bea-a8f8-b7c3c7ec6fbb name=/runtime.v1.RuntimeService/ExecSync
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.271391683Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=810bed7b-ed4e-4556-9d2e-bb019a4641c4 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.271442097Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=810bed7b-ed4e-4556-9d2e-bb019a4641c4 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.272652598Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e64fefcc-025f-4df0-a6ae-407ab2cb50b7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.273779974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698174223273762040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:453576,},InodesUsed:&UInt64Value{Value:198,},},},}" file="go-grpc-middleware/chain.go:25" id=e64fefcc-025f-4df0-a6ae-407ab2cb50b7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.274461779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=087ef11a-825e-47e0-90db-cefb0570418a name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.274537500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=087ef11a-825e-47e0-90db-cefb0570418a name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.275027873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9992934fbd57f8adb1a95f2bcf4cf3fd24bfe7de7d828f058e3dab5cccb5b291,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1698174198395697826,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: 2c8f64ac
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973e8eacf3cb2a46f4703cc5e5b2fb70451617deb57dabe93de6376194516ba7,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1698174196119646210,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 363c84d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a2e3b782c96f2a091e37312ce1300b3fece7dd1bff6a00fdb9ba1a78de74a35,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1698174194338523035,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-dd
cd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: c47d247e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d5425187cdc8a2d81e08f740c87b01b2fd4a24bcc8a077c6808ca1ae02db13,PodSandboxId:87cf6ad22715050a5364d24e370d7322a5616c05e5205c5dc52db6501826faa8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698174193269762382,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rflxx,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: bf1a6b62-59ad-4fc2-b33b-94df7e8140c0,},Annotations:map[string]string{io.kubernetes.container.hash: 240fcf71,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3feca886dd266473e4cbf063327aa57ac82978cf325f87dcf1669c0e0434a3d8,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1698174190886014588,Labels:map[string]string{io.kubernetes.container.name:
hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: e816e1d5,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56e5a8f13c6029a8e463fbc83fbadfaf0091615033be89685c3c2458f257be0,PodSandboxId:06ead9376c4df0c511f0a3e1017d5323808175d8109dee01b1ee364b0a785757,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:2648554ee53ec65a6095e00a53c89efae60aa21086733cdf56ae05e8f8546788,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:2648554ee53ec65a6095e00a53c89efae60aa210
86733cdf56ae05e8f8546788,State:CONTAINER_RUNNING,CreatedAt:1698174189154996434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6f48fc54bd-vvrbc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85a4a208-cb9e-4c26-8a4f-f939c08527d3,},Annotations:map[string]string{io.kubernetes.container.hash: 6a475b1d,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:697f9b26d77e7b392e182ab38b6a086ee69fa1d0a388928f4eedaaa9e6a7b98b,Pod
SandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1698174180722738185,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: cc2e33dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:529f2f72378a9a0fe9f6015abf921c98dfdf8d52645a3916a867b98e6d4d41c4,PodSandboxId:61dba133f60f118f69e5f9898763bfa34bf897c12f9083c0969f5c14b1282fea,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1698174175331043861,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-gdslt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ba3a215-6f34-45d8-90ab-e2823003d8ba,},Annotations:map[string]string{io.kubernetes.container.hash: 87718504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f3bc75d05ef65349f00f594187ba1f6968cfb5198f93e8699836b4393ab737,PodSandboxId:46925b2fe4bed50071f15a273ddeeb171847d295d3c8a3b795f4d312c3fc4e04,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},},ImageRef:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,State:CONTAINER_RUNNING,CreatedAt:1698174170253677951,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-mzrhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3653bdf1-8b0f-4839-abe0-48a7faadeb74,},Annotations:map[string]string{io.kubernetes.container.hash: cec464a1,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containe
rPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef8fe13203e4a11604bdcb89c937b2cb59434e95ec1d8ec8358748d47ab2dec,PodSandboxId:80893d3050676a632f862939fda1b0607bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698174165653183225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.
kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f60b8b7ef27b44fc06b48e1a30b25eeeee3d7720b456c8f36b44c49fee74e15,PodSandboxId:405061ff459505d39e3e0fd628903292f4f47f88415a4b88dac9c1a230e5f957,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1698174165387532275,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-5hc9g,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 68ab6123-ccb9-4af7-aa9d-dc523a62522a,},Annotations:map[string]string{io.kubernetes.container.hash: 778708c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b0ea7a05d51aa922fff59f68c1ec691cc605cb123c645772f174e4d26cd7183,PodSandboxId:0e02be4f58dec58f29a5bbebf7dbc00500350df1045b84070bdd0254d9271ea1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174163783482331,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-
patch-cpn5m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db328688-a88c-432e-88bc-3b2a4d39eded,},Annotations:map[string]string{io.kubernetes.container.hash: 13eabb71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7f7d2ab56f2dd0ccc80dce19ac2a6ed6b03ee6855f42e5dd95e407e2533816,PodSandboxId:98d3201ace32b1c05b0370d84e72adad11dec108227a143a562091fdba1d026d,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:15c4280d13a61df703b12d1fd1b5b5eec4658157db3cb4b851d3259502310136,Annotations:map[string]string{},},ImageRef:nvcr.io/nvidia/k8s-device-plugin@sha256:15c4280d13a61df703b12d1fd1b5b5eec4658157db3cb4b851d3259502310136,State:CONTAINER_RUNNING,CreatedAt:1698174163674787380,Labels:map[string]string{io.kubernetes.container.name: nvidia-device
-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-kcrfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d67427-465c-406a-a425-3ded489815e8,},Annotations:map[string]string{io.kubernetes.container.hash: 45c5235c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6706e308a7ccc0aae6da59537dcc38d12a592e6df17f2a2112a1d67a021c9240,PodSandboxId:11a45fbe69d2c2d27643f5afa84955ea29ae30f6600a5c761b8bda83de44b9a3,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:169817415352582
5357,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-t5w8q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a935592b-64df-4d97-b96e-d6c95e226c43,},Annotations:map[string]string{io.kubernetes.container.hash: a0379d86,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99459448b3fdc3274d474bca410a0052d7fa55e5a16ddfdddb90a6f60cc87591,PodSandboxId:7462a79bfac647146d898380ceb685d9b4fce22f8679e10a7cf582be77a5dbb7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae
8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1698174152159045738,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c0bd57-8a4c-438a-b200-5b32f8e2c490,},Annotations:map[string]string{io.kubernetes.container.hash: 5a1204c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db3a60377ca4e038ffcdb23116f604f481dd247a6921661577fe8132aee6288,PodSandboxId:115bf7b2f0e0fcaf1dc7a2e56b9ac0025d1cd6ee27c9b3b009f9af3caa475ae2,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4a
be27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1698174150314734422,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d55372e-c8e4-4e55-b251-9dad4fad9890,},Annotations:map[string]string{io.kubernetes.container.hash: 5e692dba,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ca1b64df363f154c4f031013d7c4ed423b1b8461a21ce91dd754003286c4b2,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-externa
l-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1698174143633114140,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: 5bf6caa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d0f52676c1ff7b3990666caf9f4d7ba934afeee25cf4a6e04fc6be06bb1e7c,PodSandboxId:4f6689bec7ff592a9d67e27a92a0351bfb0743a45f1a05cbb479ec43de9b2dba,Met
adata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1698174141581410084,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b79df6c1-4d3c-4ca3-9ad0-d832297c94c9,},Annotations:map[string]string{io.kubernetes.container.hash: 9d0cd4b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7871dfa83671700f40e07b51783ea2706d230a528f970ad0379c4d1c7c62e9ab,PodSandboxId:08d6d1178815e0041733ec4254a7a9
6026aa797f7973538d774f099c6634af60,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174139920057708,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp2f5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77b5c409-9bd7-4af3-bb7f-cc9c167c8911,},Annotations:map[string]string{io.kubernetes.container.hash: c877cfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d3a3fb0099c469a3fb79ee36b60c9fac70e8e089c998593
2b4c9b8b4f77bf2,PodSandboxId:36a21adc3933a6d86d31ae31065dd35af4936149e2667220261075be6b166170,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,State:CONTAINER_RUNNING,CreatedAt:1698174138016904580,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c4f8q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c7c62856-a84d-4c73-b4b5-ab373ec3b9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4dbf7d50,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a67d397330438efea6123cf6942871d601269335e882400f80253b73792a9,PodSandboxId:80893d3050676a632f862939fda1b0607bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698174131178422014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3253511e1c28ee63a8c92adf0f3834ad2fe6d4d555ad22a26e09a3565d00ce40,PodSandboxId:a108fe980cae30ea05622d76016d91c5e33da98fceea93a57e17b89a66880e24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698174126318950701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hz7fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d9bae-e261-4141-9430-b0bfaf748547,},Annotations:map[string]string{io.kubernetes.container.hash: ab861489,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:4d7766ee3fd77eed71755f7d7fdbb364fefc93d40d02b5811343b6396bdec5e5,PodSandboxId:801b7e41ebced3ab192cb807e87eee2f10f052b16deabf1fac95c9532d9fa498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698174118722490945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-btn4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a65ce1f-1502-4afb-9739-3ff39aa260e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d2525,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936bd9cee2edd8128f6f93e9bf47bb4b7a3a3137bddb0093af55bba76a2a39af,PodSandboxId:9c0595b62da7813fe4b0abf24117bb0680ee92f31821000caa0e39bc0ccbaec2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698174093630625587,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1f72940e24a91b7afecd058f85cf6c,},Annotations:map[string]string{io.kubernetes.container.hash: a80d1a29,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0e879fa8539e14a050e6f7b75fca822cd3f520881caa05765b33d76bc7ca3a,PodSandboxId:e59a8c40c1a60f80b700b1d3b7530d87f51d59d21bdcb04dc91995f9649aa260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698174093365778385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f32b306985ba0b85e27281d251aa310,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aac672c812a42e3f21eaf0d8a59b5f36ff8ed5775dfdbd7c64440cabd6777e9,PodSandboxId:c6314b2b790d469dfa8975fa9b0fc6315eba659a672ea559a6543321006d0d62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698174093265470811,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54266d083ecdf4ecb5e305fb10b9988a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2684a464c22a82bbd599a984372f9266e27dd5d50e488e7968af530e25b5af13,PodSandboxId:88a89655a2dbc7902c96c1e04f566ee7a877bc09724c265d292ae921d5f2a22b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698174093154250826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc5ece5f95f40c2404b74a679745064,},Annotations:map[string]string{io.kubernetes.container.hash: d952e5d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=087ef11a-825e-47e0-90db-cefb0570418a name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.331901641Z" level=debug msg="Received container exit code: 0, message: " file="oci/runtime_oci.go:617" id=962faf05-854b-4df8-acc1-658d14d2cf5a name=/runtime.v1.RuntimeService/ExecSync
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.332031136Z" level=debug msg="Response: &ExecSyncResponse{Stdout:[FILTERED],Stderr:[],ExitCode:0,}" file="go-grpc-middleware/chain.go:25" id=962faf05-854b-4df8-acc1-658d14d2cf5a name=/runtime.v1.RuntimeService/ExecSync
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.340381359Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7e887ec0-69c7-4002-93f1-865efc331de1 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.340433288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7e887ec0-69c7-4002-93f1-865efc331de1 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.341606511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=046e88eb-c199-4ea2-ab57-b5c7a3dd8c89 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.342863644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698174223342846590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:453576,},InodesUsed:&UInt64Value{Value:198,},},},}" file="go-grpc-middleware/chain.go:25" id=046e88eb-c199-4ea2-ab57-b5c7a3dd8c89 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.343556065Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=24aeae84-b0c3-4387-bf6d-dd4d2c0fac82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.343660048Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=24aeae84-b0c3-4387-bf6d-dd4d2c0fac82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.344213698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9992934fbd57f8adb1a95f2bcf4cf3fd24bfe7de7d828f058e3dab5cccb5b291,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1698174198395697826,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: 2c8f64ac
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973e8eacf3cb2a46f4703cc5e5b2fb70451617deb57dabe93de6376194516ba7,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1698174196119646210,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 363c84d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a2e3b782c96f2a091e37312ce1300b3fece7dd1bff6a00fdb9ba1a78de74a35,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1698174194338523035,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-dd
cd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: c47d247e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d5425187cdc8a2d81e08f740c87b01b2fd4a24bcc8a077c6808ca1ae02db13,PodSandboxId:87cf6ad22715050a5364d24e370d7322a5616c05e5205c5dc52db6501826faa8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698174193269762382,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rflxx,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: bf1a6b62-59ad-4fc2-b33b-94df7e8140c0,},Annotations:map[string]string{io.kubernetes.container.hash: 240fcf71,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3feca886dd266473e4cbf063327aa57ac82978cf325f87dcf1669c0e0434a3d8,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1698174190886014588,Labels:map[string]string{io.kubernetes.container.name:
hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: e816e1d5,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56e5a8f13c6029a8e463fbc83fbadfaf0091615033be89685c3c2458f257be0,PodSandboxId:06ead9376c4df0c511f0a3e1017d5323808175d8109dee01b1ee364b0a785757,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:2648554ee53ec65a6095e00a53c89efae60aa21086733cdf56ae05e8f8546788,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:2648554ee53ec65a6095e00a53c89efae60aa210
86733cdf56ae05e8f8546788,State:CONTAINER_RUNNING,CreatedAt:1698174189154996434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6f48fc54bd-vvrbc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85a4a208-cb9e-4c26-8a4f-f939c08527d3,},Annotations:map[string]string{io.kubernetes.container.hash: 6a475b1d,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:697f9b26d77e7b392e182ab38b6a086ee69fa1d0a388928f4eedaaa9e6a7b98b,Pod
SandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1698174180722738185,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: cc2e33dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:529f2f72378a9a0fe9f6015abf921c98dfdf8d52645a3916a867b98e6d4d41c4,PodSandboxId:61dba133f60f118f69e5f9898763bfa34bf897c12f9083c0969f5c14b1282fea,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1698174175331043861,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-gdslt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ba3a215-6f34-45d8-90ab-e2823003d8ba,},Annotations:map[string]string{io.kubernetes.container.hash: 87718504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f3bc75d05ef65349f00f594187ba1f6968cfb5198f93e8699836b4393ab737,PodSandboxId:46925b2fe4bed50071f15a273ddeeb171847d295d3c8a3b795f4d312c3fc4e04,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},},ImageRef:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,State:CONTAINER_RUNNING,CreatedAt:1698174170253677951,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-mzrhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3653bdf1-8b0f-4839-abe0-48a7faadeb74,},Annotations:map[string]string{io.kubernetes.container.hash: cec464a1,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containe
rPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef8fe13203e4a11604bdcb89c937b2cb59434e95ec1d8ec8358748d47ab2dec,PodSandboxId:80893d3050676a632f862939fda1b0607bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698174165653183225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.
kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f60b8b7ef27b44fc06b48e1a30b25eeeee3d7720b456c8f36b44c49fee74e15,PodSandboxId:405061ff459505d39e3e0fd628903292f4f47f88415a4b88dac9c1a230e5f957,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1698174165387532275,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-5hc9g,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 68ab6123-ccb9-4af7-aa9d-dc523a62522a,},Annotations:map[string]string{io.kubernetes.container.hash: 778708c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b0ea7a05d51aa922fff59f68c1ec691cc605cb123c645772f174e4d26cd7183,PodSandboxId:0e02be4f58dec58f29a5bbebf7dbc00500350df1045b84070bdd0254d9271ea1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174163783482331,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-
patch-cpn5m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db328688-a88c-432e-88bc-3b2a4d39eded,},Annotations:map[string]string{io.kubernetes.container.hash: 13eabb71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7f7d2ab56f2dd0ccc80dce19ac2a6ed6b03ee6855f42e5dd95e407e2533816,PodSandboxId:98d3201ace32b1c05b0370d84e72adad11dec108227a143a562091fdba1d026d,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:15c4280d13a61df703b12d1fd1b5b5eec4658157db3cb4b851d3259502310136,Annotations:map[string]string{},},ImageRef:nvcr.io/nvidia/k8s-device-plugin@sha256:15c4280d13a61df703b12d1fd1b5b5eec4658157db3cb4b851d3259502310136,State:CONTAINER_RUNNING,CreatedAt:1698174163674787380,Labels:map[string]string{io.kubernetes.container.name: nvidia-device
-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-kcrfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d67427-465c-406a-a425-3ded489815e8,},Annotations:map[string]string{io.kubernetes.container.hash: 45c5235c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6706e308a7ccc0aae6da59537dcc38d12a592e6df17f2a2112a1d67a021c9240,PodSandboxId:11a45fbe69d2c2d27643f5afa84955ea29ae30f6600a5c761b8bda83de44b9a3,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:169817415352582
5357,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-t5w8q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a935592b-64df-4d97-b96e-d6c95e226c43,},Annotations:map[string]string{io.kubernetes.container.hash: a0379d86,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99459448b3fdc3274d474bca410a0052d7fa55e5a16ddfdddb90a6f60cc87591,PodSandboxId:7462a79bfac647146d898380ceb685d9b4fce22f8679e10a7cf582be77a5dbb7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae
8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1698174152159045738,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c0bd57-8a4c-438a-b200-5b32f8e2c490,},Annotations:map[string]string{io.kubernetes.container.hash: 5a1204c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db3a60377ca4e038ffcdb23116f604f481dd247a6921661577fe8132aee6288,PodSandboxId:115bf7b2f0e0fcaf1dc7a2e56b9ac0025d1cd6ee27c9b3b009f9af3caa475ae2,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4a
be27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1698174150314734422,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d55372e-c8e4-4e55-b251-9dad4fad9890,},Annotations:map[string]string{io.kubernetes.container.hash: 5e692dba,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ca1b64df363f154c4f031013d7c4ed423b1b8461a21ce91dd754003286c4b2,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-externa
l-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1698174143633114140,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: 5bf6caa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d0f52676c1ff7b3990666caf9f4d7ba934afeee25cf4a6e04fc6be06bb1e7c,PodSandboxId:4f6689bec7ff592a9d67e27a92a0351bfb0743a45f1a05cbb479ec43de9b2dba,Met
adata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1698174141581410084,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b79df6c1-4d3c-4ca3-9ad0-d832297c94c9,},Annotations:map[string]string{io.kubernetes.container.hash: 9d0cd4b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7871dfa83671700f40e07b51783ea2706d230a528f970ad0379c4d1c7c62e9ab,PodSandboxId:08d6d1178815e0041733ec4254a7a9
6026aa797f7973538d774f099c6634af60,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174139920057708,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp2f5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77b5c409-9bd7-4af3-bb7f-cc9c167c8911,},Annotations:map[string]string{io.kubernetes.container.hash: c877cfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d3a3fb0099c469a3fb79ee36b60c9fac70e8e089c998593
2b4c9b8b4f77bf2,PodSandboxId:36a21adc3933a6d86d31ae31065dd35af4936149e2667220261075be6b166170,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,State:CONTAINER_RUNNING,CreatedAt:1698174138016904580,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c4f8q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c7c62856-a84d-4c73-b4b5-ab373ec3b9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4dbf7d50,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a67d397330438efea6123cf6942871d601269335e882400f80253b73792a9,PodSandboxId:80893d3050676a632f862939fda1b0607bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698174131178422014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3253511e1c28ee63a8c92adf0f3834ad2fe6d4d555ad22a26e09a3565d00ce40,PodSandboxId:a108fe980cae30ea05622d76016d91c5e33da98fceea93a57e17b89a66880e24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698174126318950701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hz7fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d9bae-e261-4141-9430-b0bfaf748547,},Annotations:map[string]string{io.kubernetes.container.hash: ab861489,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:4d7766ee3fd77eed71755f7d7fdbb364fefc93d40d02b5811343b6396bdec5e5,PodSandboxId:801b7e41ebced3ab192cb807e87eee2f10f052b16deabf1fac95c9532d9fa498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698174118722490945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-btn4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a65ce1f-1502-4afb-9739-3ff39aa260e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d2525,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936bd9cee2edd8128f6f93e9bf47bb4b7a3a3137bddb0093af55bba76a2a39af,PodSandboxId:9c0595b62da7813fe4b0abf24117bb0680ee92f31821000caa0e39bc0ccbaec2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698174093630625587,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1f72940e24a91b7afecd058f85cf6c,},Annotations:map[string]string{io.kubernetes.container.hash: a80d1a29,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0e879fa8539e14a050e6f7b75fca822cd3f520881caa05765b33d76bc7ca3a,PodSandboxId:e59a8c40c1a60f80b700b1d3b7530d87f51d59d21bdcb04dc91995f9649aa260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698174093365778385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f32b306985ba0b85e27281d251aa310,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aac672c812a42e3f21eaf0d8a59b5f36ff8ed5775dfdbd7c64440cabd6777e9,PodSandboxId:c6314b2b790d469dfa8975fa9b0fc6315eba659a672ea559a6543321006d0d62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698174093265470811,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54266d083ecdf4ecb5e305fb10b9988a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2684a464c22a82bbd599a984372f9266e27dd5d50e488e7968af530e25b5af13,PodSandboxId:88a89655a2dbc7902c96c1e04f566ee7a877bc09724c265d292ae921d5f2a22b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698174093154250826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc5ece5f95f40c2404b74a679745064,},Annotations:map[string]string{io.kubernetes.container.hash: d952e5d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=24aeae84-b0c3-4387-bf6d-dd4d2c0fac82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.403377830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6beacfa4-3d3a-4ec6-8141-1292af950554 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.403438584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6beacfa4-3d3a-4ec6-8141-1292af950554 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.411620202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4a55c7f9-46f8-4d01-9a81-0c2aa0c33d4e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.412702644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698174223412686697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:453576,},InodesUsed:&UInt64Value{Value:198,},},},}" file="go-grpc-middleware/chain.go:25" id=4a55c7f9-46f8-4d01-9a81-0c2aa0c33d4e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.413253443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0cf67471-2c48-49b7-9998-cb79f17bead0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.413380014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0cf67471-2c48-49b7-9998-cb79f17bead0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:03:43 addons-866342 crio[717]: time="2023-10-24 19:03:43.413939472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9992934fbd57f8adb1a95f2bcf4cf3fd24bfe7de7d828f058e3dab5cccb5b291,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1698174198395697826,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: 2c8f64ac
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973e8eacf3cb2a46f4703cc5e5b2fb70451617deb57dabe93de6376194516ba7,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1698174196119646210,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 363c84d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a2e3b782c96f2a091e37312ce1300b3fece7dd1bff6a00fdb9ba1a78de74a35,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1698174194338523035,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-dd
cd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: c47d247e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d5425187cdc8a2d81e08f740c87b01b2fd4a24bcc8a077c6808ca1ae02db13,PodSandboxId:87cf6ad22715050a5364d24e370d7322a5616c05e5205c5dc52db6501826faa8,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1698174193269762382,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-rflxx,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: bf1a6b62-59ad-4fc2-b33b-94df7e8140c0,},Annotations:map[string]string{io.kubernetes.container.hash: 240fcf71,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3feca886dd266473e4cbf063327aa57ac82978cf325f87dcf1669c0e0434a3d8,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1698174190886014588,Labels:map[string]string{io.kubernetes.container.name:
hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: e816e1d5,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56e5a8f13c6029a8e463fbc83fbadfaf0091615033be89685c3c2458f257be0,PodSandboxId:06ead9376c4df0c511f0a3e1017d5323808175d8109dee01b1ee364b0a785757,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:2648554ee53ec65a6095e00a53c89efae60aa21086733cdf56ae05e8f8546788,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:2648554ee53ec65a6095e00a53c89efae60aa210
86733cdf56ae05e8f8546788,State:CONTAINER_RUNNING,CreatedAt:1698174189154996434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6f48fc54bd-vvrbc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85a4a208-cb9e-4c26-8a4f-f939c08527d3,},Annotations:map[string]string{io.kubernetes.container.hash: 6a475b1d,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:697f9b26d77e7b392e182ab38b6a086ee69fa1d0a388928f4eedaaa9e6a7b98b,Pod
SandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1698174180722738185,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: cc2e33dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:529f2f72378a9a0fe9f6015abf921c98dfdf8d52645a3916a867b98e6d4d41c4,PodSandboxId:61dba133f60f118f69e5f9898763bfa34bf897c12f9083c0969f5c14b1282fea,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1698174175331043861,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-gdslt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ba3a215-6f34-45d8-90ab-e2823003d8ba,},Annotations:map[string]string{io.kubernetes.container.hash: 87718504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63f3bc75d05ef65349f00f594187ba1f6968cfb5198f93e8699836b4393ab737,PodSandboxId:46925b2fe4bed50071f15a273ddeeb171847d295d3c8a3b795f4d312c3fc4e04,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},},ImageRef:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,State:CONTAINER_RUNNING,CreatedAt:1698174170253677951,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-mzrhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3653bdf1-8b0f-4839-abe0-48a7faadeb74,},Annotations:map[string]string{io.kubernetes.container.hash: cec464a1,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containe
rPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef8fe13203e4a11604bdcb89c937b2cb59434e95ec1d8ec8358748d47ab2dec,PodSandboxId:80893d3050676a632f862939fda1b0607bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698174165653183225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.
kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f60b8b7ef27b44fc06b48e1a30b25eeeee3d7720b456c8f36b44c49fee74e15,PodSandboxId:405061ff459505d39e3e0fd628903292f4f47f88415a4b88dac9c1a230e5f957,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1698174165387532275,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-5hc9g,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 68ab6123-ccb9-4af7-aa9d-dc523a62522a,},Annotations:map[string]string{io.kubernetes.container.hash: 778708c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b0ea7a05d51aa922fff59f68c1ec691cc605cb123c645772f174e4d26cd7183,PodSandboxId:0e02be4f58dec58f29a5bbebf7dbc00500350df1045b84070bdd0254d9271ea1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174163783482331,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-
patch-cpn5m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db328688-a88c-432e-88bc-3b2a4d39eded,},Annotations:map[string]string{io.kubernetes.container.hash: 13eabb71,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7f7d2ab56f2dd0ccc80dce19ac2a6ed6b03ee6855f42e5dd95e407e2533816,PodSandboxId:98d3201ace32b1c05b0370d84e72adad11dec108227a143a562091fdba1d026d,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:15c4280d13a61df703b12d1fd1b5b5eec4658157db3cb4b851d3259502310136,Annotations:map[string]string{},},ImageRef:nvcr.io/nvidia/k8s-device-plugin@sha256:15c4280d13a61df703b12d1fd1b5b5eec4658157db3cb4b851d3259502310136,State:CONTAINER_RUNNING,CreatedAt:1698174163674787380,Labels:map[string]string{io.kubernetes.container.name: nvidia-device
-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-kcrfw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d67427-465c-406a-a425-3ded489815e8,},Annotations:map[string]string{io.kubernetes.container.hash: 45c5235c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6706e308a7ccc0aae6da59537dcc38d12a592e6df17f2a2112a1d67a021c9240,PodSandboxId:11a45fbe69d2c2d27643f5afa84955ea29ae30f6600a5c761b8bda83de44b9a3,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:169817415352582
5357,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-t5w8q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a935592b-64df-4d97-b96e-d6c95e226c43,},Annotations:map[string]string{io.kubernetes.container.hash: a0379d86,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99459448b3fdc3274d474bca410a0052d7fa55e5a16ddfdddb90a6f60cc87591,PodSandboxId:7462a79bfac647146d898380ceb685d9b4fce22f8679e10a7cf582be77a5dbb7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae
8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1698174152159045738,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c0bd57-8a4c-438a-b200-5b32f8e2c490,},Annotations:map[string]string{io.kubernetes.container.hash: 5a1204c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db3a60377ca4e038ffcdb23116f604f481dd247a6921661577fe8132aee6288,PodSandboxId:115bf7b2f0e0fcaf1dc7a2e56b9ac0025d1cd6ee27c9b3b009f9af3caa475ae2,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4a
be27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1698174150314734422,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d55372e-c8e4-4e55-b251-9dad4fad9890,},Annotations:map[string]string{io.kubernetes.container.hash: 5e692dba,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ca1b64df363f154c4f031013d7c4ed423b1b8461a21ce91dd754003286c4b2,PodSandboxId:2c77f3759676fda038a2b4b6bb54dbd925a9afd690899682ccd6a1e4813d2f44,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-externa
l-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1698174143633114140,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2x7pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 413ba041-ddcd-4b11-8908-3fbaaf9f9128,},Annotations:map[string]string{io.kubernetes.container.hash: 5bf6caa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d0f52676c1ff7b3990666caf9f4d7ba934afeee25cf4a6e04fc6be06bb1e7c,PodSandboxId:4f6689bec7ff592a9d67e27a92a0351bfb0743a45f1a05cbb479ec43de9b2dba,Met
adata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1698174141581410084,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b79df6c1-4d3c-4ca3-9ad0-d832297c94c9,},Annotations:map[string]string{io.kubernetes.container.hash: 9d0cd4b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7871dfa83671700f40e07b51783ea2706d230a528f970ad0379c4d1c7c62e9ab,PodSandboxId:08d6d1178815e0041733ec4254a7a9
6026aa797f7973538d774f099c6634af60,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1698174139920057708,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zp2f5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77b5c409-9bd7-4af3-bb7f-cc9c167c8911,},Annotations:map[string]string{io.kubernetes.container.hash: c877cfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d3a3fb0099c469a3fb79ee36b60c9fac70e8e089c998593
2b4c9b8b4f77bf2,PodSandboxId:36a21adc3933a6d86d31ae31065dd35af4936149e2667220261075be6b166170,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21,State:CONTAINER_RUNNING,CreatedAt:1698174138016904580,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-c4f8q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c7c62856-a84d-4c73-b4b5-ab373ec3b9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4dbf7d50,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53a67d397330438efea6123cf6942871d601269335e882400f80253b73792a9,PodSandboxId:80893d3050676a632f862939fda1b0607bfd15398d7b99bfe7a7b6fcd9aad8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698174131178422014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e76c081a-8dc7-4ce4-b1b6-6982e9cfcf25,},Annotations:map[string]string{io.kubernetes.container.hash: 2e0adb12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3253511e1c28ee63a8c92adf0f3834ad2fe6d4d555ad22a26e09a3565d00ce40,PodSandboxId:a108fe980cae30ea05622d76016d91c5e33da98fceea93a57e17b89a66880e24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698174126318950701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hz7fb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6d9bae-e261-4141-9430-b0bfaf748547,},Annotations:map[string]string{io.kubernetes.container.hash: ab861489,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:4d7766ee3fd77eed71755f7d7fdbb364fefc93d40d02b5811343b6396bdec5e5,PodSandboxId:801b7e41ebced3ab192cb807e87eee2f10f052b16deabf1fac95c9532d9fa498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698174118722490945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-btn4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a65ce1f-1502-4afb-9739-3ff39aa260e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d2525,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936bd9cee2edd8128f6f93e9bf47bb4b7a3a3137bddb0093af55bba76a2a39af,PodSandboxId:9c0595b62da7813fe4b0abf24117bb0680ee92f31821000caa0e39bc0ccbaec2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698174093630625587,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc1f72940e24a91b7afecd058f85cf6c,},Annotations:map[string]string{io.kubernetes.container.hash: a80d1a29,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe0e879fa8539e14a050e6f7b75fca822cd3f520881caa05765b33d76bc7ca3a,PodSandboxId:e59a8c40c1a60f80b700b1d3b7530d87f51d59d21bdcb04dc91995f9649aa260,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698174093365778385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f32b306985ba0b85e27281d251aa310,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aac672c812a42e3f21eaf0d8a59b5f36ff8ed5775dfdbd7c64440cabd6777e9,PodSandboxId:c6314b2b790d469dfa8975fa9b0fc6315eba659a672ea559a6543321006d0d62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698174093265470811,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54266d083ecdf4ecb5e305fb10b9988a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2684a464c22a82bbd599a984372f9266e27dd5d50e488e7968af530e25b5af13,PodSandboxId:88a89655a2dbc7902c96c1e04f566ee7a877bc09724c265d292ae921d5f2a22b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698174093154250826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-866342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bc5ece5f95f40c2404b74a679745064,},Annotations:map[string]string{io.kubernetes.container.hash: d952e5d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0cf67471-2c48-49b7-9998-cb79f17bead0 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	9992934fbd57f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          25 seconds ago       Running             csi-snapshotter                          0                   2c77f3759676f       csi-hostpathplugin-2x7pp
	973e8eacf3cb2       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          27 seconds ago       Running             csi-provisioner                          0                   2c77f3759676f       csi-hostpathplugin-2x7pp
	2a2e3b782c96f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            29 seconds ago       Running             liveness-probe                           0                   2c77f3759676f       csi-hostpathplugin-2x7pp
	06d5425187cdc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                                 30 seconds ago       Running             gcp-auth                                 0                   87cf6ad227150       gcp-auth-d4c87556c-rflxx
	3feca886dd266       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           32 seconds ago       Running             hostpath                                 0                   2c77f3759676f       csi-hostpathplugin-2x7pp
	a56e5a8f13c60       registry.k8s.io/ingress-nginx/controller@sha256:2648554ee53ec65a6095e00a53c89efae60aa21086733cdf56ae05e8f8546788                             34 seconds ago       Running             controller                               0                   06ead9376c4df       ingress-nginx-controller-6f48fc54bd-vvrbc
	697f9b26d77e7       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                42 seconds ago       Running             node-driver-registrar                    0                   2c77f3759676f       csi-hostpathplugin-2x7pp
	529f2f72378a9       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      48 seconds ago       Running             volume-snapshot-controller               0                   61dba133f60f1       snapshot-controller-58dbcc7b99-gdslt
	63f3bc75d05ef       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  53 seconds ago       Running             tiller                                   0                   46925b2fe4bed       tiller-deploy-7b677967b9-mzrhm
	aef8fe13203e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             57 seconds ago       Running             storage-provisioner                      1                   80893d3050676       storage-provisioner
	9f60b8b7ef27b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      58 seconds ago       Running             volume-snapshot-controller               0                   405061ff45950       snapshot-controller-58dbcc7b99-5hc9g
	8b0ea7a05d51a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   59 seconds ago       Exited              patch                                    0                   0e02be4f58dec       ingress-nginx-admission-patch-cpn5m
	cd7f7d2ab56f2       nvcr.io/nvidia/k8s-device-plugin@sha256:15c4280d13a61df703b12d1fd1b5b5eec4658157db3cb4b851d3259502310136                                     59 seconds ago       Running             nvidia-device-plugin-ctr                 0                   98d3201ace32b       nvidia-device-plugin-daemonset-kcrfw
	6706e308a7ccc       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   11a45fbe69d2c       local-path-provisioner-78b46b4d5c-t5w8q
	99459448b3fdc       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   7462a79bfac64       csi-hostpath-resizer-0
	3db3a60377ca4       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             About a minute ago   Running             minikube-ingress-dns                     0                   115bf7b2f0e0f       kube-ingress-dns-minikube
	d5ca1b64df363       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   2c77f3759676f       csi-hostpathplugin-2x7pp
	e3d0f52676c1f       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   4f6689bec7ff5       csi-hostpath-attacher-0
	7871dfa836717       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   About a minute ago   Exited              create                                   0                   08d6d1178815e       ingress-nginx-admission-create-zp2f5
	24d3a3fb0099c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:7b77d890d8e78c9e17981524c724331cc3547eab77adf32f4222c98167c7fd21                            About a minute ago   Running             gadget                                   0                   36a21adc3933a       gadget-c4f8q
	d53a67d397330       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Exited              storage-provisioner                      0                   80893d3050676       storage-provisioner
	3253511e1c28e       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                                             About a minute ago   Running             kube-proxy                               0                   a108fe980cae3       kube-proxy-hz7fb
	4d7766ee3fd77       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             About a minute ago   Running             coredns                                  0                   801b7e41ebced       coredns-5dd5756b68-btn4f
	936bd9cee2edd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             2 minutes ago        Running             etcd                                     0                   9c0595b62da78       etcd-addons-866342
	fe0e879fa8539       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                                             2 minutes ago        Running             kube-scheduler                           0                   e59a8c40c1a60       kube-scheduler-addons-866342
	8aac672c812a4       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                                             2 minutes ago        Running             kube-controller-manager                  0                   c6314b2b790d4       kube-controller-manager-addons-866342
	2684a464c22a8       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                                             2 minutes ago        Running             kube-apiserver                           0                   88a89655a2dbc       kube-apiserver-addons-866342
	
	* 
	* ==> coredns [4d7766ee3fd77eed71755f7d7fdbb364fefc93d40d02b5811343b6396bdec5e5] <==
	* [INFO] 10.244.0.6:45339 - 51654 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000156054s
	[INFO] 10.244.0.6:40781 - 28545 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076049s
	[INFO] 10.244.0.6:40781 - 38141 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.001674223s
	[INFO] 10.244.0.6:34074 - 33831 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060764s
	[INFO] 10.244.0.6:34074 - 65238 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062505s
	[INFO] 10.244.0.6:37006 - 11354 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076454s
	[INFO] 10.244.0.6:37006 - 52568 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072965s
	[INFO] 10.244.0.6:36559 - 54795 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095109s
	[INFO] 10.244.0.6:36559 - 53511 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000064423s
	[INFO] 10.244.0.6:50015 - 35787 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000196331s
	[INFO] 10.244.0.6:50015 - 8905 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000169411s
	[INFO] 10.244.0.6:47951 - 11934 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045869s
	[INFO] 10.244.0.6:47951 - 9884 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098304s
	[INFO] 10.244.0.6:60450 - 38153 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000074006s
	[INFO] 10.244.0.6:60450 - 21271 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000063887s
	[INFO] 10.244.0.20:52467 - 34449 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000460102s
	[INFO] 10.244.0.20:58297 - 179 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000159392s
	[INFO] 10.244.0.20:39044 - 53863 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138179s
	[INFO] 10.244.0.20:55215 - 12268 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108806s
	[INFO] 10.244.0.20:42378 - 53725 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000303568s
	[INFO] 10.244.0.20:52652 - 24410 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000746993s
	[INFO] 10.244.0.20:37984 - 37086 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00060007s
	[INFO] 10.244.0.20:50796 - 23696 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000731589s
	[INFO] 10.244.0.23:38343 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000190314s
	[INFO] 10.244.0.23:44423 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094011s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-866342
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-866342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=addons-866342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_01_41_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-866342
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-866342"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:01:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-866342
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:03:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:03:13 +0000   Tue, 24 Oct 2023 19:01:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:03:13 +0000   Tue, 24 Oct 2023 19:01:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:03:13 +0000   Tue, 24 Oct 2023 19:01:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:03:13 +0000   Tue, 24 Oct 2023 19:01:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    addons-866342
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 2799a64041ca4d8881b5d53fbd221f45
	  System UUID:                2799a640-41ca-4d88-81b5-d53fbd221f45
	  Boot ID:                    e72a99ec-72e9-4002-ab8a-b128d71c8bda
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  gadget                      gadget-c4f8q                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  gcp-auth                    gcp-auth-d4c87556c-rflxx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  ingress-nginx               ingress-nginx-controller-6f48fc54bd-vvrbc    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         102s
	  kube-system                 coredns-5dd5756b68-btn4f                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     110s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 csi-hostpathplugin-2x7pp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 etcd-addons-866342                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m3s
	  kube-system                 kube-apiserver-addons-866342                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-addons-866342        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-hz7fb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-addons-866342                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 nvidia-device-plugin-daemonset-kcrfw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 snapshot-controller-58dbcc7b99-5hc9g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 snapshot-controller-58dbcc7b99-gdslt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 tiller-deploy-7b677967b9-mzrhm               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  local-path-storage          local-path-provisioner-78b46b4d5c-t5w8q      0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 93s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node addons-866342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node addons-866342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x7 over 2m11s)  kubelet          Node addons-866342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m2s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s                   kubelet          Node addons-866342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s                   kubelet          Node addons-866342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s                   kubelet          Node addons-866342 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m2s                   kubelet          Node addons-866342 status is now: NodeReady
	  Normal  RegisteredNode           111s                   node-controller  Node addons-866342 event: Registered Node addons-866342 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.097755] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct24 19:01] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.463431] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150613] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.058990] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.975190] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.102388] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.134915] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.114699] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.214536] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[  +9.225259] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +9.254807] systemd-fstab-generator[1245]: Ignoring "noauto" for root device
	[ +19.696121] kauditd_printk_skb: 10 callbacks suppressed
	[Oct24 19:02] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.017627] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.514980] kauditd_printk_skb: 4 callbacks suppressed
	[ +15.045801] kauditd_printk_skb: 18 callbacks suppressed
	[Oct24 19:03] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.005024] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.005027] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.537255] kauditd_printk_skb: 15 callbacks suppressed
	
	* 
	* ==> etcd [936bd9cee2edd8128f6f93e9bf47bb4b7a3a3137bddb0093af55bba76a2a39af] <==
	* {"level":"info","ts":"2023-10-24T19:02:49.186089Z","caller":"traceutil/trace.go:171","msg":"trace[1195952678] transaction","detail":"{read_only:false; response_revision:999; number_of_response:1; }","duration":"312.479943ms","start":"2023-10-24T19:02:48.873599Z","end":"2023-10-24T19:02:49.186079Z","steps":["trace[1195952678] 'process raft request'  (duration: 312.148867ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:02:49.186268Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-24T19:02:48.873586Z","time spent":"312.627543ms","remote":"127.0.0.1:49192","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":826,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-certs-patch-cjr4z.17912068c59cea20\" mod_revision:0 > success:<request_put:<key:\"/registry/events/gcp-auth/gcp-auth-certs-patch-cjr4z.17912068c59cea20\" value_size:739 lease:5689325486867454225 >> failure:<>"}
	{"level":"warn","ts":"2023-10-24T19:02:49.187407Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.51914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13864"}
	{"level":"info","ts":"2023-10-24T19:02:49.187502Z","caller":"traceutil/trace.go:171","msg":"trace[1039726344] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:999; }","duration":"276.74587ms","start":"2023-10-24T19:02:48.910746Z","end":"2023-10-24T19:02:49.187492Z","steps":["trace[1039726344] 'agreement among raft nodes before linearized reading'  (duration: 275.811698ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:02:49.199009Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.96165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82237"}
	{"level":"info","ts":"2023-10-24T19:02:49.199098Z","caller":"traceutil/trace.go:171","msg":"trace[1078243644] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1000; }","duration":"171.058753ms","start":"2023-10-24T19:02:49.02803Z","end":"2023-10-24T19:02:49.199089Z","steps":["trace[1078243644] 'agreement among raft nodes before linearized reading'  (duration: 165.698624ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:02:49.20771Z","caller":"traceutil/trace.go:171","msg":"trace[963569469] transaction","detail":"{read_only:false; response_revision:1000; number_of_response:1; }","duration":"256.477498ms","start":"2023-10-24T19:02:48.951217Z","end":"2023-10-24T19:02:49.207694Z","steps":["trace[963569469] 'process raft request'  (duration: 236.86989ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:02:49.209715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.57384ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82237"}
	{"level":"info","ts":"2023-10-24T19:02:49.209774Z","caller":"traceutil/trace.go:171","msg":"trace[1723567840] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1000; }","duration":"258.631845ms","start":"2023-10-24T19:02:48.951129Z","end":"2023-10-24T19:02:49.209761Z","steps":["trace[1723567840] 'agreement among raft nodes before linearized reading'  (duration: 238.132414ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:02:49.209246Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.645506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T19:02:49.209983Z","caller":"traceutil/trace.go:171","msg":"trace[1453851684] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1000; }","duration":"248.393426ms","start":"2023-10-24T19:02:48.961582Z","end":"2023-10-24T19:02:49.209976Z","steps":["trace[1453851684] 'agreement among raft nodes before linearized reading'  (duration: 247.592324ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:02:52.255475Z","caller":"traceutil/trace.go:171","msg":"trace[1383206217] linearizableReadLoop","detail":"{readStateIndex:1053; appliedIndex:1052; }","duration":"226.249042ms","start":"2023-10-24T19:02:52.029212Z","end":"2023-10-24T19:02:52.255461Z","steps":["trace[1383206217] 'read index received'  (duration: 226.027303ms)","trace[1383206217] 'applied index is now lower than readState.Index'  (duration: 221.248µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:02:52.255645Z","caller":"traceutil/trace.go:171","msg":"trace[1250206044] transaction","detail":"{read_only:false; response_revision:1022; number_of_response:1; }","duration":"258.403206ms","start":"2023-10-24T19:02:51.997228Z","end":"2023-10-24T19:02:52.255631Z","steps":["trace[1250206044] 'process raft request'  (duration: 258.111178ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:02:52.255817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.705404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-10-24T19:02:52.257918Z","caller":"traceutil/trace.go:171","msg":"trace[2028574549] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1022; }","duration":"193.81294ms","start":"2023-10-24T19:02:52.064093Z","end":"2023-10-24T19:02:52.257906Z","steps":["trace[2028574549] 'agreement among raft nodes before linearized reading'  (duration: 191.679534ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:02:52.256187Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.986748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82322"}
	{"level":"info","ts":"2023-10-24T19:02:52.258081Z","caller":"traceutil/trace.go:171","msg":"trace[338915624] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1022; }","duration":"228.890913ms","start":"2023-10-24T19:02:52.029183Z","end":"2023-10-24T19:02:52.258074Z","steps":["trace[338915624] 'agreement among raft nodes before linearized reading'  (duration: 226.825858ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:03:05.983702Z","caller":"traceutil/trace.go:171","msg":"trace[1268068637] transaction","detail":"{read_only:false; response_revision:1075; number_of_response:1; }","duration":"106.772805ms","start":"2023-10-24T19:03:05.876916Z","end":"2023-10-24T19:03:05.983689Z","steps":["trace[1268068637] 'process raft request'  (duration: 106.349737ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:03:12.652441Z","caller":"traceutil/trace.go:171","msg":"trace[400807164] linearizableReadLoop","detail":"{readStateIndex:1145; appliedIndex:1144; }","duration":"141.056459ms","start":"2023-10-24T19:03:12.511372Z","end":"2023-10-24T19:03:12.652428Z","steps":["trace[400807164] 'read index received'  (duration: 140.781066ms)","trace[400807164] 'applied index is now lower than readState.Index'  (duration: 274.892µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T19:03:12.652683Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.404305ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-10-24T19:03:12.652938Z","caller":"traceutil/trace.go:171","msg":"trace[381011134] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1110; }","duration":"141.669387ms","start":"2023-10-24T19:03:12.511257Z","end":"2023-10-24T19:03:12.652926Z","steps":["trace[381011134] 'agreement among raft nodes before linearized reading'  (duration: 141.361002ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:03:12.653253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.44824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82490"}
	{"level":"info","ts":"2023-10-24T19:03:12.652774Z","caller":"traceutil/trace.go:171","msg":"trace[663327054] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"379.329217ms","start":"2023-10-24T19:03:12.273383Z","end":"2023-10-24T19:03:12.652713Z","steps":["trace[663327054] 'process raft request'  (duration: 378.812226ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:03:12.653548Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-24T19:03:12.273368Z","time spent":"380.062475ms","remote":"127.0.0.1:49214","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7530,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/addons-866342\" mod_revision:934 > success:<request_put:<key:\"/registry/minions/addons-866342\" value_size:7491 >> failure:<request_range:<key:\"/registry/minions/addons-866342\" > >"}
	{"level":"info","ts":"2023-10-24T19:03:12.653395Z","caller":"traceutil/trace.go:171","msg":"trace[471025193] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1110; }","duration":"124.651819ms","start":"2023-10-24T19:03:12.528734Z","end":"2023-10-24T19:03:12.653386Z","steps":["trace[471025193] 'agreement among raft nodes before linearized reading'  (duration: 124.346617ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [06d5425187cdc8a2d81e08f740c87b01b2fd4a24bcc8a077c6808ca1ae02db13] <==
	* 2023/10/24 19:03:13 GCP Auth Webhook started!
	2023/10/24 19:03:19 Ready to marshal response ...
	2023/10/24 19:03:19 Ready to write response ...
	2023/10/24 19:03:19 Ready to marshal response ...
	2023/10/24 19:03:19 Ready to write response ...
	2023/10/24 19:03:29 Ready to marshal response ...
	2023/10/24 19:03:29 Ready to write response ...
	2023/10/24 19:03:29 Ready to marshal response ...
	2023/10/24 19:03:29 Ready to write response ...
	2023/10/24 19:03:29 Ready to marshal response ...
	2023/10/24 19:03:29 Ready to write response ...
	2023/10/24 19:03:42 Ready to marshal response ...
	2023/10/24 19:03:42 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:03:43 up 2 min,  0 users,  load average: 4.06, 2.41, 0.95
	Linux addons-866342 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2684a464c22a82bbd599a984372f9266e27dd5d50e488e7968af530e25b5af13] <==
	* I1024 19:02:02.119824       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I1024 19:02:02.504110       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.101.181.0"}
	W1024 19:02:02.529747       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1024 19:02:03.625510       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 19:02:04.802978       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.110.198.69"}
	I1024 19:02:05.807963       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 19:02:10.808519       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 19:02:37.454468       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 19:02:43.661744       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 19:02:43.661865       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 19:02:43.663379       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1024 19:02:43.663538       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.222.23:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.222.23:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.222.23:443: connect: connection refused
	E1024 19:02:43.664011       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.222.23:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.222.23:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.222.23:443: connect: connection refused
	E1024 19:02:43.671115       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.222.23:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.222.23:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.222.23:443: connect: connection refused
	I1024 19:02:43.838072       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1024 19:03:30.686068       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1024 19:03:30.703577       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1024 19:03:30.729841       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1024 19:03:34.176845       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.163:8443->10.244.0.25:57760: read: connection reset by peer
	I1024 19:03:37.458895       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 19:03:41.914914       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1024 19:03:42.229027       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.40.203"}
	
	* 
	* ==> kube-controller-manager [8aac672c812a42e3f21eaf0d8a59b5f36ff8ed5775dfdbd7c64440cabd6777e9] <==
	* I1024 19:03:07.066881       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1024 19:03:07.083023       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1024 19:03:07.083484       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I1024 19:03:09.482953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="97.742µs"
	I1024 19:03:13.550584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="12.70397ms"
	I1024 19:03:13.550694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="57.311µs"
	I1024 19:03:16.695420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="9.905508ms"
	I1024 19:03:16.695840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="348.621µs"
	I1024 19:03:19.732241       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1024 19:03:19.751036       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1024 19:03:19.751657       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1024 19:03:19.944981       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1024 19:03:20.032480       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1024 19:03:20.091194       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1024 19:03:22.427816       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1024 19:03:22.427877       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1024 19:03:23.908439       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="26.634145ms"
	I1024 19:03:23.908969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="173.219µs"
	I1024 19:03:24.716234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-56665cdfc" duration="14.3µs"
	I1024 19:03:30.685718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="6.56µs"
	I1024 19:03:34.693424       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="14.145µs"
	I1024 19:03:37.010807       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1024 19:03:37.041905       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1024 19:03:37.428888       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1024 19:03:40.577519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="6.356µs"
	
	* 
	* ==> kube-proxy [3253511e1c28ee63a8c92adf0f3834ad2fe6d4d555ad22a26e09a3565d00ce40] <==
	* I1024 19:02:08.771158       1 server_others.go:69] "Using iptables proxy"
	I1024 19:02:08.940201       1 node.go:141] Successfully retrieved node IP: 192.168.39.163
	I1024 19:02:09.671563       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 19:02:09.671615       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 19:02:09.854441       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:02:09.854548       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:02:09.854718       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:02:09.854728       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:02:09.882402       1 config.go:188] "Starting service config controller"
	I1024 19:02:09.884531       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:02:09.884574       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:02:09.884591       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:02:09.894621       1 config.go:315] "Starting node config controller"
	I1024 19:02:09.894809       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:02:10.204722       1 shared_informer.go:318] Caches are synced for node config
	I1024 19:02:10.272414       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 19:02:10.277064       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [fe0e879fa8539e14a050e6f7b75fca822cd3f520881caa05765b33d76bc7ca3a] <==
	* E1024 19:01:37.640746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 19:01:37.640018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1024 19:01:37.640132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:01:37.640228       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1024 19:01:37.640381       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:01:37.640495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:01:37.640628       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1024 19:01:37.640634       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1024 19:01:38.491343       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:01:38.491398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1024 19:01:38.635616       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:01:38.635764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1024 19:01:38.745116       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:01:38.745206       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 19:01:38.780590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:01:38.780678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1024 19:01:38.783983       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 19:01:38.784017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1024 19:01:38.810383       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:01:38.810468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1024 19:01:38.846548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:01:38.846648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1024 19:01:38.852481       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 19:01:38.852568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1024 19:01:41.807479       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 19:01:06 UTC, ends at Tue 2023-10-24 19:03:44 UTC. --
	Oct 24 19:03:41 addons-866342 kubelet[1252]: I1024 19:03:41.883207    1252 scope.go:117] "RemoveContainer" containerID="f45a75f792bfa4e39c7e0ee3f5551a92642103c75ba844ca258e56fcd459720f"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.172739    1252 topology_manager.go:215] "Topology Admit Handler" podUID="f5245aa2-39c0-4f7b-917a-28296885d357" podNamespace="default" podName="nginx"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: E1024 19:03:42.172808    1252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="216942df-99c1-4c92-b8bd-f0594dbb6894" containerName="metrics-server"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: E1024 19:03:42.172818    1252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fca3774c-80d6-47c1-93e4-d8e1a5ef6b5d" containerName="helm-test"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: E1024 19:03:42.172827    1252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd54e9d3-a6ec-43ec-910e-38ddb0de2574" containerName="registry-proxy"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: E1024 19:03:42.172835    1252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bdb2f8f6-7af0-4311-8635-684c996e5143" containerName="helper-pod"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: E1024 19:03:42.172843    1252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2119ee2c-b73a-4baa-84b7-e66867a1bb46" containerName="registry-test"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: E1024 19:03:42.172851    1252 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16c9f9e1-0151-4045-bb71-6e31267e58df" containerName="registry"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.172883    1252 memory_manager.go:346] "RemoveStaleState removing state" podUID="bdb2f8f6-7af0-4311-8635-684c996e5143" containerName="helper-pod"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.172890    1252 memory_manager.go:346] "RemoveStaleState removing state" podUID="fca3774c-80d6-47c1-93e4-d8e1a5ef6b5d" containerName="helm-test"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.172898    1252 memory_manager.go:346] "RemoveStaleState removing state" podUID="2119ee2c-b73a-4baa-84b7-e66867a1bb46" containerName="registry-test"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.172905    1252 memory_manager.go:346] "RemoveStaleState removing state" podUID="216942df-99c1-4c92-b8bd-f0594dbb6894" containerName="metrics-server"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.172911    1252 memory_manager.go:346] "RemoveStaleState removing state" podUID="216942df-99c1-4c92-b8bd-f0594dbb6894" containerName="metrics-server"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.172917    1252 memory_manager.go:346] "RemoveStaleState removing state" podUID="bd54e9d3-a6ec-43ec-910e-38ddb0de2574" containerName="registry-proxy"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.172923    1252 memory_manager.go:346] "RemoveStaleState removing state" podUID="16c9f9e1-0151-4045-bb71-6e31267e58df" containerName="registry"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.205017    1252 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/216942df-99c1-4c92-b8bd-f0594dbb6894-tmp-dir\") pod \"216942df-99c1-4c92-b8bd-f0594dbb6894\" (UID: \"216942df-99c1-4c92-b8bd-f0594dbb6894\") "
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.205094    1252 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpb45\" (UniqueName: \"kubernetes.io/projected/216942df-99c1-4c92-b8bd-f0594dbb6894-kube-api-access-vpb45\") pod \"216942df-99c1-4c92-b8bd-f0594dbb6894\" (UID: \"216942df-99c1-4c92-b8bd-f0594dbb6894\") "
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.205710    1252 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216942df-99c1-4c92-b8bd-f0594dbb6894-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "216942df-99c1-4c92-b8bd-f0594dbb6894" (UID: "216942df-99c1-4c92-b8bd-f0594dbb6894"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.213601    1252 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216942df-99c1-4c92-b8bd-f0594dbb6894-kube-api-access-vpb45" (OuterVolumeSpecName: "kube-api-access-vpb45") pod "216942df-99c1-4c92-b8bd-f0594dbb6894" (UID: "216942df-99c1-4c92-b8bd-f0594dbb6894"). InnerVolumeSpecName "kube-api-access-vpb45". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.305916    1252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4cjh\" (UniqueName: \"kubernetes.io/projected/f5245aa2-39c0-4f7b-917a-28296885d357-kube-api-access-k4cjh\") pod \"nginx\" (UID: \"f5245aa2-39c0-4f7b-917a-28296885d357\") " pod="default/nginx"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.305960    1252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f5245aa2-39c0-4f7b-917a-28296885d357-gcp-creds\") pod \"nginx\" (UID: \"f5245aa2-39c0-4f7b-917a-28296885d357\") " pod="default/nginx"
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.305984    1252 reconciler_common.go:300] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/216942df-99c1-4c92-b8bd-f0594dbb6894-tmp-dir\") on node \"addons-866342\" DevicePath \"\""
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.305995    1252 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vpb45\" (UniqueName: \"kubernetes.io/projected/216942df-99c1-4c92-b8bd-f0594dbb6894-kube-api-access-vpb45\") on node \"addons-866342\" DevicePath \"\""
	Oct 24 19:03:42 addons-866342 kubelet[1252]: I1024 19:03:42.895638    1252 scope.go:117] "RemoveContainer" containerID="c6bab557d19aecff22ed73c30dc23079757b8653ad9175d7f9a72146e4ef1f3f"
	Oct 24 19:03:43 addons-866342 kubelet[1252]: I1024 19:03:43.058164    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="216942df-99c1-4c92-b8bd-f0594dbb6894" path="/var/lib/kubelet/pods/216942df-99c1-4c92-b8bd-f0594dbb6894/volumes"
	
	* 
	* ==> storage-provisioner [aef8fe13203e4a11604bdcb89c937b2cb59434e95ec1d8ec8358748d47ab2dec] <==
	* I1024 19:02:45.941903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:02:45.960370       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:02:45.960496       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 19:02:45.974451       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 19:02:45.975132       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-866342_b09f27bd-b684-4953-848e-949d7dd75a59!
	I1024 19:02:45.974559       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f49ec8cf-b9c0-4c3b-b848-b2be04049b0a", APIVersion:"v1", ResourceVersion:"971", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-866342_b09f27bd-b684-4953-848e-949d7dd75a59 became leader
	I1024 19:02:46.075313       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-866342_b09f27bd-b684-4953-848e-949d7dd75a59!
	
	* 
	* ==> storage-provisioner [d53a67d397330438efea6123cf6942871d601269335e882400f80253b73792a9] <==
	* I1024 19:02:14.590000       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1024 19:02:44.609185       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-866342 -n addons-866342
helpers_test.go:261: (dbg) Run:  kubectl --context addons-866342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx ingress-nginx-admission-create-zp2f5 ingress-nginx-admission-patch-cpn5m
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/InspektorGadget]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-866342 describe pod nginx ingress-nginx-admission-create-zp2f5 ingress-nginx-admission-patch-cpn5m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-866342 describe pod nginx ingress-nginx-admission-create-zp2f5 ingress-nginx-admission-patch-cpn5m: exit status 1 (88.469649ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-866342/192.168.39.163
	Start Time:       Tue, 24 Oct 2023 19:03:42 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k4cjh (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-k4cjh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/nginx to addons-866342
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx:alpine"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zp2f5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cpn5m" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-866342 describe pod nginx ingress-nginx-admission-create-zp2f5 ingress-nginx-admission-patch-cpn5m: exit status 1
--- FAIL: TestAddons/parallel/InspektorGadget (8.71s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (155.54s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-866342
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-866342: exit status 82 (2m1.76654505s)

                                                
                                                
-- stdout --
	* Stopping node "addons-866342"  ...
	* Stopping node "addons-866342"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-866342" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-866342
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-866342: exit status 11 (21.486201215s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-866342" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-866342
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-866342: exit status 11 (6.143673463s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-866342" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-866342
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-866342: exit status 11 (6.144241119s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-866342" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.54s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (177.01s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-845802 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1024 19:16:02.947812   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-845802 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.589156682s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-845802 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-845802 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [238d799a-11dc-49ea-94eb-b98d29b3ceab] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [238d799a-11dc-49ea-94eb-b98d29b3ceab] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.019529315s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845802 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1024 19:18:10.559048   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:18:10.564348   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:18:10.574607   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:18:10.594975   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:18:10.635206   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:18:10.715513   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:18:10.875942   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:18:11.196355   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:18:11.837273   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:18:13.117416   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:18:15.678513   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:18:19.103873   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:18:20.799129   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:18:31.040111   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-845802 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.792625052s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-845802 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845802 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.131
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845802 addons disable ingress-dns --alsologtostderr -v=1
E1024 19:18:46.791677   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-845802 addons disable ingress-dns --alsologtostderr -v=1: (11.280404782s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845802 addons disable ingress --alsologtostderr -v=1
E1024 19:18:51.520530   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-845802 addons disable ingress --alsologtostderr -v=1: (7.539360326s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-845802 -n ingress-addon-legacy-845802
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-845802 logs -n 25: (1.1409701s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-853597                                                   | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3935999515/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-853597                                                   | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3935999515/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-853597                                                   | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3935999515/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-853597 ssh findmnt                                          | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC | 24 Oct 23 19:13 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-853597 ssh findmnt                                          | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC | 24 Oct 23 19:13 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| update-context | functional-853597                                                      | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC | 24 Oct 23 19:13 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| ssh            | functional-853597 ssh findmnt                                          | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC | 24 Oct 23 19:13 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| update-context | functional-853597                                                      | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC | 24 Oct 23 19:13 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-853597                                                      | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC | 24 Oct 23 19:13 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-853597                                                      | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC | 24 Oct 23 19:13 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| mount          | -p functional-853597                                                   | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| image          | functional-853597                                                      | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC | 24 Oct 23 19:13 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-853597 ssh pgrep                                            | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-853597                                                      | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC | 24 Oct 23 19:13 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-853597 image build -t                                       | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC | 24 Oct 23 19:13 UTC |
	|                | localhost/my-image:functional-853597                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-853597                                                      | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC | 24 Oct 23 19:13 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-853597 image ls                                             | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:13 UTC | 24 Oct 23 19:13 UTC |
	| delete         | -p functional-853597                                                   | functional-853597           | jenkins | v1.31.2 | 24 Oct 23 19:14 UTC | 24 Oct 23 19:14 UTC |
	| start          | -p ingress-addon-legacy-845802                                         | ingress-addon-legacy-845802 | jenkins | v1.31.2 | 24 Oct 23 19:14 UTC | 24 Oct 23 19:15 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                     |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-845802                                            | ingress-addon-legacy-845802 | jenkins | v1.31.2 | 24 Oct 23 19:15 UTC | 24 Oct 23 19:15 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-845802                                            | ingress-addon-legacy-845802 | jenkins | v1.31.2 | 24 Oct 23 19:15 UTC | 24 Oct 23 19:16 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-845802                                            | ingress-addon-legacy-845802 | jenkins | v1.31.2 | 24 Oct 23 19:16 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-845802 ip                                         | ingress-addon-legacy-845802 | jenkins | v1.31.2 | 24 Oct 23 19:18 UTC | 24 Oct 23 19:18 UTC |
	| addons         | ingress-addon-legacy-845802                                            | ingress-addon-legacy-845802 | jenkins | v1.31.2 | 24 Oct 23 19:18 UTC | 24 Oct 23 19:18 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-845802                                            | ingress-addon-legacy-845802 | jenkins | v1.31.2 | 24 Oct 23 19:18 UTC | 24 Oct 23 19:18 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:14:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:14:04.759382   25059 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:14:04.759648   25059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:14:04.759657   25059 out.go:309] Setting ErrFile to fd 2...
	I1024 19:14:04.759662   25059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:14:04.759806   25059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 19:14:04.760342   25059 out.go:303] Setting JSON to false
	I1024 19:14:04.761114   25059 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3143,"bootTime":1698171702,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:14:04.761165   25059 start.go:138] virtualization: kvm guest
	I1024 19:14:04.763397   25059 out.go:177] * [ingress-addon-legacy-845802] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:14:04.764933   25059 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:14:04.766248   25059 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:14:04.764978   25059 notify.go:220] Checking for updates...
	I1024 19:14:04.768976   25059 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:14:04.770544   25059 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:14:04.771961   25059 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:14:04.773290   25059 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:14:04.774799   25059 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:14:04.807966   25059 out.go:177] * Using the kvm2 driver based on user configuration
	I1024 19:14:04.809271   25059 start.go:298] selected driver: kvm2
	I1024 19:14:04.809288   25059 start.go:902] validating driver "kvm2" against <nil>
	I1024 19:14:04.809324   25059 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:14:04.810041   25059 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:14:04.810117   25059 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:14:04.823556   25059 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:14:04.823588   25059 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:14:04.823792   25059 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:14:04.823859   25059 cni.go:84] Creating CNI manager for ""
	I1024 19:14:04.823875   25059 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:14:04.823888   25059 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1024 19:14:04.823903   25059 start_flags.go:323] config:
	{Name:ingress-addon-legacy-845802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-845802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:14:04.824049   25059 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:14:04.825691   25059 out.go:177] * Starting control plane node ingress-addon-legacy-845802 in cluster ingress-addon-legacy-845802
	I1024 19:14:04.827036   25059 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:14:04.848637   25059 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1024 19:14:04.848652   25059 cache.go:57] Caching tarball of preloaded images
	I1024 19:14:04.848767   25059 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:14:04.850269   25059 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1024 19:14:04.851487   25059 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:14:04.883770   25059 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1024 19:14:08.527672   25059 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:14:08.527763   25059 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:14:09.507062   25059 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1024 19:14:09.507382   25059 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/config.json ...
	I1024 19:14:09.507410   25059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/config.json: {Name:mk802797858f7a1bc359bb1378746d0326688929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:14:09.507600   25059 start.go:365] acquiring machines lock for ingress-addon-legacy-845802: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:14:09.507639   25059 start.go:369] acquired machines lock for "ingress-addon-legacy-845802" in 20.188µs
	I1024 19:14:09.507667   25059 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-845802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-845802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:14:09.507752   25059 start.go:125] createHost starting for "" (driver="kvm2")
	I1024 19:14:09.509953   25059 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1024 19:14:09.510087   25059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:14:09.510118   25059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:14:09.523598   25059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46119
	I1024 19:14:09.524038   25059 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:14:09.524638   25059 main.go:141] libmachine: Using API Version  1
	I1024 19:14:09.524662   25059 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:14:09.524977   25059 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:14:09.525130   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetMachineName
	I1024 19:14:09.525287   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .DriverName
	I1024 19:14:09.525461   25059 start.go:159] libmachine.API.Create for "ingress-addon-legacy-845802" (driver="kvm2")
	I1024 19:14:09.525483   25059 client.go:168] LocalClient.Create starting
	I1024 19:14:09.525524   25059 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem
	I1024 19:14:09.525587   25059 main.go:141] libmachine: Decoding PEM data...
	I1024 19:14:09.525627   25059 main.go:141] libmachine: Parsing certificate...
	I1024 19:14:09.525684   25059 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem
	I1024 19:14:09.525712   25059 main.go:141] libmachine: Decoding PEM data...
	I1024 19:14:09.525724   25059 main.go:141] libmachine: Parsing certificate...
	I1024 19:14:09.525738   25059 main.go:141] libmachine: Running pre-create checks...
	I1024 19:14:09.525749   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .PreCreateCheck
	I1024 19:14:09.526126   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetConfigRaw
	I1024 19:14:09.526470   25059 main.go:141] libmachine: Creating machine...
	I1024 19:14:09.526484   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .Create
	I1024 19:14:09.526616   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Creating KVM machine...
	I1024 19:14:09.528086   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found existing default KVM network
	I1024 19:14:09.528714   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:09.528597   25093 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I1024 19:14:09.533659   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | trying to create private KVM network mk-ingress-addon-legacy-845802 192.168.39.0/24...
	I1024 19:14:09.598727   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | private KVM network mk-ingress-addon-legacy-845802 192.168.39.0/24 created
	I1024 19:14:09.598766   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Setting up store path in /home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802 ...
	I1024 19:14:09.598787   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:09.598678   25093 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:14:09.598804   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Building disk image from file:///home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso
	I1024 19:14:09.598829   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Downloading /home/jenkins/minikube-integration/17485-9023/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso...
	I1024 19:14:09.799680   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:09.799539   25093 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802/id_rsa...
	I1024 19:14:09.969192   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:09.969077   25093 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802/ingress-addon-legacy-845802.rawdisk...
	I1024 19:14:09.969222   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Writing magic tar header
	I1024 19:14:09.969242   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Writing SSH key tar header
	I1024 19:14:09.969251   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:09.969195   25093 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802 ...
	I1024 19:14:09.969377   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802
	I1024 19:14:09.969413   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube/machines
	I1024 19:14:09.969430   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802 (perms=drwx------)
	I1024 19:14:09.969458   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube/machines (perms=drwxr-xr-x)
	I1024 19:14:09.969507   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube (perms=drwxr-xr-x)
	I1024 19:14:09.969539   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:14:09.969570   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023
	I1024 19:14:09.969586   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1024 19:14:09.969598   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023 (perms=drwxrwxr-x)
	I1024 19:14:09.969613   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1024 19:14:09.969631   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1024 19:14:09.969647   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Creating domain...
	I1024 19:14:09.969685   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Checking permissions on dir: /home/jenkins
	I1024 19:14:09.969712   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Checking permissions on dir: /home
	I1024 19:14:09.969731   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Skipping /home - not owner
	I1024 19:14:09.970643   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) define libvirt domain using xml: 
	I1024 19:14:09.970674   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) <domain type='kvm'>
	I1024 19:14:09.970688   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)   <name>ingress-addon-legacy-845802</name>
	I1024 19:14:09.970699   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)   <memory unit='MiB'>4096</memory>
	I1024 19:14:09.970706   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)   <vcpu>2</vcpu>
	I1024 19:14:09.970712   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)   <features>
	I1024 19:14:09.970719   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <acpi/>
	I1024 19:14:09.970724   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <apic/>
	I1024 19:14:09.970730   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <pae/>
	I1024 19:14:09.970735   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     
	I1024 19:14:09.970743   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)   </features>
	I1024 19:14:09.970753   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)   <cpu mode='host-passthrough'>
	I1024 19:14:09.970761   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)   
	I1024 19:14:09.970766   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)   </cpu>
	I1024 19:14:09.970774   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)   <os>
	I1024 19:14:09.970782   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <type>hvm</type>
	I1024 19:14:09.970790   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <boot dev='cdrom'/>
	I1024 19:14:09.970797   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <boot dev='hd'/>
	I1024 19:14:09.970804   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <bootmenu enable='no'/>
	I1024 19:14:09.970809   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)   </os>
	I1024 19:14:09.970816   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)   <devices>
	I1024 19:14:09.970821   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <disk type='file' device='cdrom'>
	I1024 19:14:09.970871   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <source file='/home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802/boot2docker.iso'/>
	I1024 19:14:09.970896   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <target dev='hdc' bus='scsi'/>
	I1024 19:14:09.970913   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <readonly/>
	I1024 19:14:09.970946   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     </disk>
	I1024 19:14:09.970977   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <disk type='file' device='disk'>
	I1024 19:14:09.970994   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1024 19:14:09.971017   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <source file='/home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802/ingress-addon-legacy-845802.rawdisk'/>
	I1024 19:14:09.971033   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <target dev='hda' bus='virtio'/>
	I1024 19:14:09.971053   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     </disk>
	I1024 19:14:09.971066   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <interface type='network'>
	I1024 19:14:09.971077   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <source network='mk-ingress-addon-legacy-845802'/>
	I1024 19:14:09.971083   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <model type='virtio'/>
	I1024 19:14:09.971109   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     </interface>
	I1024 19:14:09.971129   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <interface type='network'>
	I1024 19:14:09.971141   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <source network='default'/>
	I1024 19:14:09.971154   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <model type='virtio'/>
	I1024 19:14:09.971164   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     </interface>
	I1024 19:14:09.971173   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <serial type='pty'>
	I1024 19:14:09.971180   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <target port='0'/>
	I1024 19:14:09.971188   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     </serial>
	I1024 19:14:09.971195   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <console type='pty'>
	I1024 19:14:09.971205   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <target type='serial' port='0'/>
	I1024 19:14:09.971216   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     </console>
	I1024 19:14:09.971222   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     <rng model='virtio'>
	I1024 19:14:09.971236   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)       <backend model='random'>/dev/random</backend>
	I1024 19:14:09.971247   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     </rng>
	I1024 19:14:09.971257   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     
	I1024 19:14:09.971263   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)     
	I1024 19:14:09.971272   25059 main.go:141] libmachine: (ingress-addon-legacy-845802)   </devices>
	I1024 19:14:09.971278   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) </domain>
	I1024 19:14:09.971289   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) 
	I1024 19:14:09.975772   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:0f:62:c4 in network default
	I1024 19:14:09.976300   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Ensuring networks are active...
	I1024 19:14:09.976315   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:09.976890   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Ensuring network default is active
	I1024 19:14:09.977202   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Ensuring network mk-ingress-addon-legacy-845802 is active
	I1024 19:14:09.977654   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Getting domain xml...
	I1024 19:14:09.978353   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Creating domain...
	I1024 19:14:11.170406   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Waiting to get IP...
	I1024 19:14:11.171337   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:11.171704   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:11.171758   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:11.171700   25093 retry.go:31] will retry after 226.834179ms: waiting for machine to come up
	I1024 19:14:11.400081   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:11.400560   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:11.400588   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:11.400536   25093 retry.go:31] will retry after 377.577166ms: waiting for machine to come up
	I1024 19:14:11.780189   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:11.780627   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:11.780658   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:11.780600   25093 retry.go:31] will retry after 307.78542ms: waiting for machine to come up
	I1024 19:14:12.090119   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:12.090634   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:12.090661   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:12.090596   25093 retry.go:31] will retry after 552.286611ms: waiting for machine to come up
	I1024 19:14:12.644302   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:12.644818   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:12.644844   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:12.644797   25093 retry.go:31] will retry after 463.171469ms: waiting for machine to come up
	I1024 19:14:13.109013   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:13.109432   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:13.109469   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:13.109406   25093 retry.go:31] will retry after 847.644969ms: waiting for machine to come up
	I1024 19:14:13.958415   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:13.958791   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:13.958821   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:13.958738   25093 retry.go:31] will retry after 1.095347259s: waiting for machine to come up
	I1024 19:14:15.055317   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:15.055670   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:15.055700   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:15.055619   25093 retry.go:31] will retry after 1.146635156s: waiting for machine to come up
	I1024 19:14:16.203851   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:16.204278   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:16.204305   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:16.204221   25093 retry.go:31] will retry after 1.493105089s: waiting for machine to come up
	I1024 19:14:17.699824   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:17.700191   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:17.700217   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:17.700137   25093 retry.go:31] will retry after 1.512706122s: waiting for machine to come up
	I1024 19:14:19.214397   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:19.214792   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:19.214824   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:19.214755   25093 retry.go:31] will retry after 2.200892004s: waiting for machine to come up
	I1024 19:14:21.418643   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:21.419060   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:21.419093   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:21.419005   25093 retry.go:31] will retry after 3.395067086s: waiting for machine to come up
	I1024 19:14:24.815811   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:24.816222   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:24.816249   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:24.816181   25093 retry.go:31] will retry after 4.38224929s: waiting for machine to come up
	I1024 19:14:29.203547   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:29.203957   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find current IP address of domain ingress-addon-legacy-845802 in network mk-ingress-addon-legacy-845802
	I1024 19:14:29.203992   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | I1024 19:14:29.203925   25093 retry.go:31] will retry after 5.010359535s: waiting for machine to come up
	I1024 19:14:34.219397   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.219899   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Found IP for machine: 192.168.39.131
	I1024 19:14:34.219929   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has current primary IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.219937   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Reserving static IP address...
	I1024 19:14:34.220239   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-845802", mac: "52:54:00:c8:f2:8c", ip: "192.168.39.131"} in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.288045   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Getting to WaitForSSH function...
	I1024 19:14:34.288093   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Reserved static IP address: 192.168.39.131
	I1024 19:14:34.288109   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Waiting for SSH to be available...
	I1024 19:14:34.290904   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.291326   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:34.291346   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.291517   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Using SSH client type: external
	I1024 19:14:34.291544   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802/id_rsa (-rw-------)
	I1024 19:14:34.291587   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.131 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 19:14:34.291613   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | About to run SSH command:
	I1024 19:14:34.291650   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | exit 0
	I1024 19:14:34.388543   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | SSH cmd err, output: <nil>: 
	I1024 19:14:34.388809   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) KVM machine creation complete!
	I1024 19:14:34.389104   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetConfigRaw
	I1024 19:14:34.389599   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .DriverName
	I1024 19:14:34.389799   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .DriverName
	I1024 19:14:34.389971   25059 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1024 19:14:34.389984   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetState
	I1024 19:14:34.391097   25059 main.go:141] libmachine: Detecting operating system of created instance...
	I1024 19:14:34.391113   25059 main.go:141] libmachine: Waiting for SSH to be available...
	I1024 19:14:34.391123   25059 main.go:141] libmachine: Getting to WaitForSSH function...
	I1024 19:14:34.391136   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:14:34.393042   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.393325   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:34.393358   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.393433   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHPort
	I1024 19:14:34.393589   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:34.393756   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:34.393902   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHUsername
	I1024 19:14:34.394030   25059 main.go:141] libmachine: Using SSH client type: native
	I1024 19:14:34.394376   25059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I1024 19:14:34.394389   25059 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1024 19:14:34.511940   25059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:14:34.511964   25059 main.go:141] libmachine: Detecting the provisioner...
	I1024 19:14:34.511978   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:14:34.514375   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.514700   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:34.514729   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.514854   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHPort
	I1024 19:14:34.515056   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:34.515207   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:34.515345   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHUsername
	I1024 19:14:34.515544   25059 main.go:141] libmachine: Using SSH client type: native
	I1024 19:14:34.515895   25059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I1024 19:14:34.515913   25059 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1024 19:14:34.633411   25059 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g71212f5-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1024 19:14:34.633460   25059 main.go:141] libmachine: found compatible host: buildroot
	I1024 19:14:34.633468   25059 main.go:141] libmachine: Provisioning with buildroot...
	I1024 19:14:34.633476   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetMachineName
	I1024 19:14:34.633751   25059 buildroot.go:166] provisioning hostname "ingress-addon-legacy-845802"
	I1024 19:14:34.633773   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetMachineName
	I1024 19:14:34.633932   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:14:34.636168   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.636597   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:34.636617   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.636768   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHPort
	I1024 19:14:34.636933   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:34.637112   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:34.637259   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHUsername
	I1024 19:14:34.637435   25059 main.go:141] libmachine: Using SSH client type: native
	I1024 19:14:34.637747   25059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I1024 19:14:34.637761   25059 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-845802 && echo "ingress-addon-legacy-845802" | sudo tee /etc/hostname
	I1024 19:14:34.769657   25059 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-845802
	
	I1024 19:14:34.769682   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:14:34.772130   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.772476   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:34.772513   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.772671   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHPort
	I1024 19:14:34.772865   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:34.773035   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:34.773221   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHUsername
	I1024 19:14:34.773376   25059 main.go:141] libmachine: Using SSH client type: native
	I1024 19:14:34.773699   25059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I1024 19:14:34.773717   25059 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-845802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-845802/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-845802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:14:34.900917   25059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:14:34.900947   25059 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 19:14:34.900993   25059 buildroot.go:174] setting up certificates
	I1024 19:14:34.901005   25059 provision.go:83] configureAuth start
	I1024 19:14:34.901034   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetMachineName
	I1024 19:14:34.901312   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetIP
	I1024 19:14:34.903706   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.904033   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:34.904062   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.904184   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:14:34.906550   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.906793   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:34.906836   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:34.906989   25059 provision.go:138] copyHostCerts
	I1024 19:14:34.907030   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:14:34.907063   25059 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 19:14:34.907079   25059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:14:34.907139   25059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 19:14:34.907209   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:14:34.907226   25059 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 19:14:34.907232   25059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:14:34.907254   25059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 19:14:34.907296   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:14:34.907315   25059 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 19:14:34.907325   25059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:14:34.907344   25059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 19:14:34.907389   25059 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-845802 san=[192.168.39.131 192.168.39.131 localhost 127.0.0.1 minikube ingress-addon-legacy-845802]
	I1024 19:14:35.439628   25059 provision.go:172] copyRemoteCerts
	I1024 19:14:35.439684   25059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:14:35.439705   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:14:35.442262   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:35.442564   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:35.442598   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:35.442756   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHPort
	I1024 19:14:35.442940   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:35.443190   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHUsername
	I1024 19:14:35.443319   25059 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802/id_rsa Username:docker}
	I1024 19:14:35.530473   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 19:14:35.530531   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 19:14:35.552811   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 19:14:35.552866   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1024 19:14:35.574542   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 19:14:35.574594   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 19:14:35.595537   25059 provision.go:86] duration metric: configureAuth took 694.506902ms
	I1024 19:14:35.595557   25059 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:14:35.595745   25059 config.go:182] Loaded profile config "ingress-addon-legacy-845802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1024 19:14:35.595829   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:14:35.598571   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:35.598969   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:35.598997   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:35.599139   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHPort
	I1024 19:14:35.599321   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:35.599487   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:35.599624   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHUsername
	I1024 19:14:35.599789   25059 main.go:141] libmachine: Using SSH client type: native
	I1024 19:14:35.600135   25059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I1024 19:14:35.600158   25059 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:14:35.903328   25059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:14:35.903358   25059 main.go:141] libmachine: Checking connection to Docker...
	I1024 19:14:35.903372   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetURL
	I1024 19:14:35.904547   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Using libvirt version 6000000
	I1024 19:14:35.906579   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:35.906915   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:35.906951   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:35.907114   25059 main.go:141] libmachine: Docker is up and running!
	I1024 19:14:35.907127   25059 main.go:141] libmachine: Reticulating splines...
	I1024 19:14:35.907134   25059 client.go:171] LocalClient.Create took 26.381644273s
	I1024 19:14:35.907152   25059 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-845802" took 26.381692114s
	I1024 19:14:35.907162   25059 start.go:300] post-start starting for "ingress-addon-legacy-845802" (driver="kvm2")
	I1024 19:14:35.907171   25059 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:14:35.907186   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .DriverName
	I1024 19:14:35.907454   25059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:14:35.907483   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:14:35.909798   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:35.910107   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:35.910139   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:35.910261   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHPort
	I1024 19:14:35.910455   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:35.910615   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHUsername
	I1024 19:14:35.910747   25059 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802/id_rsa Username:docker}
	I1024 19:14:36.001951   25059 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:14:36.006271   25059 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 19:14:36.006291   25059 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 19:14:36.006365   25059 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 19:14:36.006442   25059 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 19:14:36.006456   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> /etc/ssl/certs/162982.pem
	I1024 19:14:36.006554   25059 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:14:36.014393   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:14:36.037360   25059 start.go:303] post-start completed in 130.187021ms
	I1024 19:14:36.037407   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetConfigRaw
	I1024 19:14:36.037964   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetIP
	I1024 19:14:36.040470   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:36.040801   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:36.040842   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:36.041034   25059 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/config.json ...
	I1024 19:14:36.041190   25059 start.go:128] duration metric: createHost completed in 26.533428361s
	I1024 19:14:36.041209   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:14:36.043205   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:36.043530   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:36.043560   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:36.043688   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHPort
	I1024 19:14:36.043857   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:36.044025   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:36.044164   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHUsername
	I1024 19:14:36.044298   25059 main.go:141] libmachine: Using SSH client type: native
	I1024 19:14:36.044593   25059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I1024 19:14:36.044604   25059 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 19:14:36.161628   25059 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698174876.134355085
	
	I1024 19:14:36.161657   25059 fix.go:206] guest clock: 1698174876.134355085
	I1024 19:14:36.161668   25059 fix.go:219] Guest: 2023-10-24 19:14:36.134355085 +0000 UTC Remote: 2023-10-24 19:14:36.041200858 +0000 UTC m=+31.329069824 (delta=93.154227ms)
	I1024 19:14:36.161690   25059 fix.go:190] guest clock delta is within tolerance: 93.154227ms
	I1024 19:14:36.161697   25059 start.go:83] releasing machines lock for "ingress-addon-legacy-845802", held for 26.654044869s
	I1024 19:14:36.161730   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .DriverName
	I1024 19:14:36.161998   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetIP
	I1024 19:14:36.164455   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:36.164789   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:36.164817   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:36.164928   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .DriverName
	I1024 19:14:36.165410   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .DriverName
	I1024 19:14:36.165561   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .DriverName
	I1024 19:14:36.165671   25059 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:14:36.165710   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:14:36.165778   25059 ssh_runner.go:195] Run: cat /version.json
	I1024 19:14:36.165815   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:14:36.167997   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:36.168207   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:36.168349   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:36.168379   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:36.168509   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHPort
	I1024 19:14:36.168507   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:36.168542   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:36.168655   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:36.168725   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHPort
	I1024 19:14:36.168801   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHUsername
	I1024 19:14:36.168874   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:14:36.168939   25059 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802/id_rsa Username:docker}
	I1024 19:14:36.169019   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHUsername
	I1024 19:14:36.169143   25059 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802/id_rsa Username:docker}
	I1024 19:14:36.253934   25059 ssh_runner.go:195] Run: systemctl --version
	I1024 19:14:36.276075   25059 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:14:36.437528   25059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 19:14:36.443164   25059 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:14:36.443225   25059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:14:36.458869   25059 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 19:14:36.458886   25059 start.go:472] detecting cgroup driver to use...
	I1024 19:14:36.458928   25059 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:14:36.473370   25059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:14:36.484772   25059 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:14:36.484808   25059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:14:36.497280   25059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:14:36.509907   25059 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:14:36.613445   25059 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:14:36.732656   25059 docker.go:214] disabling docker service ...
	I1024 19:14:36.732731   25059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:14:36.746063   25059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:14:36.757076   25059 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:14:36.866325   25059 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:14:36.980545   25059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:14:36.992058   25059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:14:37.008945   25059 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1024 19:14:37.008997   25059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:14:37.017616   25059 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:14:37.017669   25059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:14:37.026477   25059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:14:37.035250   25059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
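The three sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and place conmon in the "pod" cgroup. The touched portion of /etc/crio/crio.conf.d/02-crio.conf should end up roughly like this sketch of the expected result (not a dump from the guest):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"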
	I1024 19:14:37.044128   25059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:14:37.053124   25059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:14:37.060819   25059 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 19:14:37.060875   25059 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 19:14:37.072590   25059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
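Condensed, the netfilter preparation above loads br_netfilter (the earlier sysctl probe failed only because the module was not yet loaded) and turns on IPv4 forwarding; a minimal equivalent, with the bridge sysctl write added here for completeness rather than copied from the log:

	sudo modprobe br_netfilter
	# /proc/sys/net/bridge/bridge-nf-call-iptables exists once the module is loaded
	sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'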
	I1024 19:14:37.081911   25059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:14:37.193912   25059 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:14:37.357607   25059 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:14:37.357668   25059 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:14:37.365574   25059 start.go:540] Will wait 60s for crictl version
	I1024 19:14:37.365619   25059 ssh_runner.go:195] Run: which crictl
	I1024 19:14:37.369403   25059 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:14:37.407874   25059 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 19:14:37.407934   25059 ssh_runner.go:195] Run: crio --version
	I1024 19:14:37.456372   25059 ssh_runner.go:195] Run: crio --version
	I1024 19:14:37.501575   25059 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1024 19:14:37.502901   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetIP
	I1024 19:14:37.505590   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:37.505919   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:14:37.505945   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:14:37.506131   25059 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 19:14:37.510121   25059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
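The one-liner above rewrites /etc/hosts idempotently: it filters out any existing line for the name, appends the fresh mapping, and copies the temp file back in one step. A small helper capturing the same pattern (the function name is invented for illustration):

	add_host_entry() {  # usage: add_host_entry <ip> <name>
	  ip=$1; name=$2
	  { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
	}
	add_host_entry 192.168.39.1 host.minikube.internal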
	I1024 19:14:37.522092   25059 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:14:37.522160   25059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:14:37.561607   25059 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1024 19:14:37.561669   25059 ssh_runner.go:195] Run: which lz4
	I1024 19:14:37.565843   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1024 19:14:37.565933   25059 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1024 19:14:37.570157   25059 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:14:37.570180   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1024 19:14:39.362191   25059 crio.go:444] Took 1.796289 seconds to copy over tarball
	I1024 19:14:39.362252   25059 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 19:14:42.603240   25059 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.240966756s)
	I1024 19:14:42.603265   25059 crio.go:451] Took 3.241054 seconds to extract the tarball
	I1024 19:14:42.603274   25059 ssh_runner.go:146] rm: /preloaded.tar.lz4
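A rough sketch of the preload path exercised above, assuming the tarball is already in the local minikube cache and using plain scp/ssh in place of minikube's internal file transfer (copied to /tmp here so the transfer needs no root):

	PRELOAD=preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	scp ~/.minikube/cache/preloaded-tarball/$PRELOAD docker@192.168.39.131:/tmp/preloaded.tar.lz4
	# unpack the container image store under /var, then drop the tarball
	ssh docker@192.168.39.131 \
	  'sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'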
	I1024 19:14:42.648787   25059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:14:42.701557   25059 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1024 19:14:42.701583   25059 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 19:14:42.701637   25059 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:14:42.701683   25059 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:14:42.701706   25059 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:14:42.701731   25059 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:14:42.701752   25059 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:14:42.701690   25059 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:14:42.701905   25059 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1024 19:14:42.701906   25059 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1024 19:14:42.703043   25059 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:14:42.703060   25059 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1024 19:14:42.703075   25059 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:14:42.703085   25059 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:14:42.703097   25059 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:14:42.703111   25059 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:14:42.703156   25059 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1024 19:14:42.703042   25059 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:14:42.857678   25059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1024 19:14:42.859937   25059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:14:42.865829   25059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:14:42.866324   25059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:14:42.866849   25059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1024 19:14:42.874753   25059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:14:42.968416   25059 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1024 19:14:42.968455   25059 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:14:42.968492   25059 ssh_runner.go:195] Run: which crictl
	I1024 19:14:42.980628   25059 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1024 19:14:42.980660   25059 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:14:42.980693   25059 ssh_runner.go:195] Run: which crictl
	I1024 19:14:43.003315   25059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1024 19:14:43.006307   25059 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1024 19:14:43.006339   25059 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:14:43.006397   25059 ssh_runner.go:195] Run: which crictl
	I1024 19:14:43.007974   25059 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1024 19:14:43.007999   25059 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:14:43.008029   25059 ssh_runner.go:195] Run: which crictl
	I1024 19:14:43.019654   25059 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1024 19:14:43.019686   25059 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1024 19:14:43.019718   25059 ssh_runner.go:195] Run: which crictl
	I1024 19:14:43.024079   25059 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1024 19:14:43.024091   25059 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:14:43.024242   25059 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1024 19:14:43.024273   25059 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:14:43.024312   25059 ssh_runner.go:195] Run: which crictl
	I1024 19:14:43.090084   25059 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:14:43.090158   25059 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:14:43.090208   25059 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1024 19:14:43.090218   25059 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1024 19:14:43.090249   25059 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1024 19:14:43.090273   25059 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1024 19:14:43.090277   25059 ssh_runner.go:195] Run: which crictl
	I1024 19:14:43.122856   25059 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1024 19:14:43.122921   25059 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:14:43.177204   25059 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1024 19:14:43.189363   25059 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1024 19:14:43.189422   25059 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1024 19:14:43.189575   25059 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1024 19:14:43.213436   25059 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:14:43.215082   25059 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1024 19:14:43.253146   25059 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1024 19:14:43.378592   25059 cache_images.go:92] LoadImages completed in 676.991524ms
	W1024 19:14:43.378682   25059 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
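The warning above only means the per-image cache files were absent on the host; kubeadm pulls whatever the runtime still lacks during its preflight phase (visible further down). To compare what CRI-O already holds against what v1.18.20 needs, one could run on the guest:

	sudo crictl images
	sudo /var/lib/minikube/binaries/v1.18.20/kubeadm config images list --kubernetes-version v1.18.20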
	I1024 19:14:43.378768   25059 ssh_runner.go:195] Run: crio config
	I1024 19:14:43.435880   25059 cni.go:84] Creating CNI manager for ""
	I1024 19:14:43.435903   25059 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:14:43.435925   25059 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:14:43.435947   25059 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.131 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-845802 NodeName:ingress-addon-legacy-845802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.131"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.131 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1024 19:14:43.436119   25059 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.131
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-845802"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.131
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.131"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 19:14:43.436214   25059 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-845802 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.131
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-845802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:14:43.436273   25059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1024 19:14:43.445438   25059 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:14:43.445522   25059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:14:43.454283   25059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I1024 19:14:43.470295   25059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1024 19:14:43.485846   25059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I1024 19:14:43.501453   25059 ssh_runner.go:195] Run: grep 192.168.39.131	control-plane.minikube.internal$ /etc/hosts
	I1024 19:14:43.505214   25059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.131	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:14:43.517234   25059 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802 for IP: 192.168.39.131
	I1024 19:14:43.517264   25059 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:14:43.517439   25059 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 19:14:43.517490   25059 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 19:14:43.517559   25059 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.key
	I1024 19:14:43.517582   25059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt with IP's: []
	I1024 19:14:43.627951   25059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt ...
	I1024 19:14:43.627978   25059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: {Name:mk818b04a62d499f2d3a50cd12d249242dae19bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:14:43.628139   25059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.key ...
	I1024 19:14:43.628150   25059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.key: {Name:mkc9e1117248f2bfc8e3070028cf81ffc7d7bdef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:14:43.628220   25059 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.key.28527ae8
	I1024 19:14:43.628234   25059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.crt.28527ae8 with IP's: [192.168.39.131 10.96.0.1 127.0.0.1 10.0.0.1]
	I1024 19:14:43.852095   25059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.crt.28527ae8 ...
	I1024 19:14:43.852124   25059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.crt.28527ae8: {Name:mk3b618076313c67190119d3a0eae3f82423fc3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:14:43.852283   25059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.key.28527ae8 ...
	I1024 19:14:43.852298   25059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.key.28527ae8: {Name:mk2149f5669d7c0169d108f0eb0f13341c5342bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:14:43.852361   25059 certs.go:337] copying /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.crt.28527ae8 -> /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.crt
	I1024 19:14:43.852435   25059 certs.go:341] copying /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.key.28527ae8 -> /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.key
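The apiserver certificate assembled above is signed by the shared minikubeCA and carries the node IP together with the service VIPs (10.96.0.1, 127.0.0.1, 10.0.0.1) as subject alternative names. minikube generates it in Go; an equivalent hand-rolled sketch with openssl, using an illustrative CN and shortened file names, would be:

	# write a minimal openssl config with the SANs used above
	printf '%s\n' '[req]' 'distinguished_name = dn' '[dn]' '[ext]' \
	  'subjectAltName = IP:192.168.39.131,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1' > san.cnf
	openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube-apiserver" \
	  -keyout apiserver.key -out apiserver.csr -config san.cnf
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -extfile san.cnf -extensions ext -out apiserver.crt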
	I1024 19:14:43.852485   25059 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/proxy-client.key
	I1024 19:14:43.852503   25059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/proxy-client.crt with IP's: []
	I1024 19:14:43.932395   25059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/proxy-client.crt ...
	I1024 19:14:43.932423   25059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/proxy-client.crt: {Name:mk7c080a21cc31d10aefeca521175a47a34bf131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:14:43.932572   25059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/proxy-client.key ...
	I1024 19:14:43.932582   25059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/proxy-client.key: {Name:mk8fb3c9c86f0e9f3a7ce9d763aeba1de3d97dd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:14:43.932650   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1024 19:14:43.932667   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1024 19:14:43.932677   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1024 19:14:43.932687   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1024 19:14:43.932697   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 19:14:43.932710   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 19:14:43.932725   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 19:14:43.932742   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 19:14:43.932785   25059 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 19:14:43.932816   25059 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 19:14:43.932828   25059 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 19:14:43.932854   25059 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 19:14:43.932878   25059 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:14:43.932910   25059 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 19:14:43.932949   25059 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:14:43.932983   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:14:43.932998   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem -> /usr/share/ca-certificates/16298.pem
	I1024 19:14:43.933009   25059 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> /usr/share/ca-certificates/162982.pem
	I1024 19:14:43.933605   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:14:43.959718   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1024 19:14:43.983083   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:14:44.006202   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 19:14:44.028848   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:14:44.050657   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:14:44.075278   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:14:44.097824   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 19:14:44.120488   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:14:44.142632   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 19:14:44.164649   25059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 19:14:44.186566   25059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:14:44.201802   25059 ssh_runner.go:195] Run: openssl version
	I1024 19:14:44.207177   25059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:14:44.216358   25059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:14:44.220705   25059 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:14:44.220754   25059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:14:44.226024   25059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:14:44.235766   25059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 19:14:44.245463   25059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 19:14:44.249991   25059 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 19:14:44.250042   25059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 19:14:44.255558   25059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 19:14:44.264968   25059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 19:14:44.274682   25059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 19:14:44.279400   25059 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 19:14:44.279462   25059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 19:14:44.284839   25059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
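The link targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is how the system trust store finds a CA at verification time; the hash is computed from the certificate itself. A sketch of deriving such a link for any PEM file:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"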
	I1024 19:14:44.294244   25059 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:14:44.298611   25059 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:14:44.298669   25059 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-845802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-845802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:14:44.298760   25059 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:14:44.298801   25059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:14:44.337545   25059 cri.go:89] found id: ""
	I1024 19:14:44.337623   25059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:14:44.346866   25059 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:14:44.356016   25059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:14:44.365469   25059 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:14:44.365516   25059 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1024 19:14:44.427208   25059 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1024 19:14:44.427609   25059 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:14:44.580205   25059 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:14:44.580362   25059 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:14:44.580485   25059 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 19:14:44.802530   25059 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:14:44.803605   25059 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:14:44.803707   25059 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:14:44.920212   25059 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:14:44.922083   25059 out.go:204]   - Generating certificates and keys ...
	I1024 19:14:44.922208   25059 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:14:44.922309   25059 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:14:45.095706   25059 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:14:45.179829   25059 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:14:45.254804   25059 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1024 19:14:45.531487   25059 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1024 19:14:45.701187   25059 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1024 19:14:45.701665   25059 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-845802 localhost] and IPs [192.168.39.131 127.0.0.1 ::1]
	I1024 19:14:45.841662   25059 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1024 19:14:45.841953   25059 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-845802 localhost] and IPs [192.168.39.131 127.0.0.1 ::1]
	I1024 19:14:45.996418   25059 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:14:46.081517   25059 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:14:46.198380   25059 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1024 19:14:46.198465   25059 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:14:46.343194   25059 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:14:46.588007   25059 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:14:46.767199   25059 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:14:46.810032   25059 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:14:46.810702   25059 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:14:46.812454   25059 out.go:204]   - Booting up control plane ...
	I1024 19:14:46.812550   25059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:14:46.816405   25059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:14:46.817420   25059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:14:46.821252   25059 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:14:46.822027   25059 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:14:56.321163   25059 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503700 seconds
	I1024 19:14:56.321337   25059 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:14:56.335264   25059 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:14:56.858548   25059 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:14:56.858734   25059 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-845802 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1024 19:14:57.370688   25059 kubeadm.go:322] [bootstrap-token] Using token: 4fsp3p.6gns79v05yq8komp
	I1024 19:14:57.372217   25059 out.go:204]   - Configuring RBAC rules ...
	I1024 19:14:57.372331   25059 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:14:57.379392   25059 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:14:57.397209   25059 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:14:57.401534   25059 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:14:57.405107   25059 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:14:57.409355   25059 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:14:57.422284   25059 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:14:57.672879   25059 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:14:57.799579   25059 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:14:57.802072   25059 kubeadm.go:322] 
	I1024 19:14:57.802157   25059 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:14:57.802171   25059 kubeadm.go:322] 
	I1024 19:14:57.802252   25059 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:14:57.802266   25059 kubeadm.go:322] 
	I1024 19:14:57.802299   25059 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:14:57.802394   25059 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:14:57.802464   25059 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:14:57.802474   25059 kubeadm.go:322] 
	I1024 19:14:57.802532   25059 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:14:57.802633   25059 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:14:57.802707   25059 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:14:57.802726   25059 kubeadm.go:322] 
	I1024 19:14:57.802838   25059 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:14:57.802904   25059 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:14:57.802910   25059 kubeadm.go:322] 
	I1024 19:14:57.802976   25059 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4fsp3p.6gns79v05yq8komp \
	I1024 19:14:57.803070   25059 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f \
	I1024 19:14:57.803092   25059 kubeadm.go:322]     --control-plane 
	I1024 19:14:57.803098   25059 kubeadm.go:322] 
	I1024 19:14:57.803169   25059 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:14:57.803188   25059 kubeadm.go:322] 
	I1024 19:14:57.803268   25059 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4fsp3p.6gns79v05yq8komp \
	I1024 19:14:57.803368   25059 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
	I1024 19:14:57.803514   25059 kubeadm.go:322] W1024 19:14:44.409391     962 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1024 19:14:57.803621   25059 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:14:57.803816   25059 kubeadm.go:322] W1024 19:14:46.800959     962 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1024 19:14:57.803962   25059 kubeadm.go:322] W1024 19:14:46.802025     962 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1024 19:14:57.803977   25059 cni.go:84] Creating CNI manager for ""
	I1024 19:14:57.803986   25059 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:14:57.805756   25059 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 19:14:57.807139   25059 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 19:14:57.818262   25059 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 19:14:57.836649   25059 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:14:57.836719   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:14:57.836762   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=ingress-addon-legacy-845802 minikube.k8s.io/updated_at=2023_10_24T19_14_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
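Two quick checks for the RBAC binding and node labels applied above, purely as a verification sketch rather than part of the test:

	sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get clusterrolebinding minikube-rbac -o wide
	sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get node ingress-addon-legacy-845802 --show-labels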
	I1024 19:14:58.221678   25059 ops.go:34] apiserver oom_adj: -16
	I1024 19:14:58.221784   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:14:58.417033   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:14:59.005698   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:14:59.505597   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:00.005305   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:00.506101   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:01.005275   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:01.505958   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:02.005958   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:02.505258   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:03.005828   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:03.505816   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:04.006110   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:04.505768   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:05.005462   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:05.505287   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:06.005200   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:06.505885   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:07.005113   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:07.505881   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:08.005159   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:08.505090   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:09.006083   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:09.505932   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:10.005915   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:10.505779   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:11.005711   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:11.506093   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:12.005685   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:12.505499   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:13.219733   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:13.505838   25059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:15:13.594567   25059 kubeadm.go:1081] duration metric: took 15.757902667s to wait for elevateKubeSystemPrivileges.
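	[annotation] The burst of identical `kubectl get sa default` runs above is a roughly 500 ms poll: the elevateKubeSystemPrivileges step waits for the "default" ServiceAccount to exist (about 15.8 s here) so the cluster-admin binding created earlier can take effect. A minimal sketch of that retry pattern, using the binary and kubeconfig paths from the log; the interval, timeout, and function name are illustrative, not minikube's actual helper.

    package waitutil // illustrative package name

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultServiceAccount keeps running `kubectl get sa default` until it
    // succeeds or the deadline passes, mirroring the poll visible in the log above.
    func waitForDefaultServiceAccount(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.18.20/kubectl",
    			"get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			return nil // the default ServiceAccount exists
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("default service account not created within %s", timeout)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }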
	I1024 19:15:13.594599   25059 kubeadm.go:406] StartCluster complete in 29.295936884s
	I1024 19:15:13.594620   25059 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:15:13.594729   25059 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:15:13.595596   25059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:15:13.595827   25059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:15:13.595942   25059 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:15:13.596043   25059 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-845802"
	I1024 19:15:13.596063   25059 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-845802"
	I1024 19:15:13.596112   25059 host.go:66] Checking if "ingress-addon-legacy-845802" exists ...
	I1024 19:15:13.596013   25059 config.go:182] Loaded profile config "ingress-addon-legacy-845802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1024 19:15:13.596177   25059 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-845802"
	I1024 19:15:13.596200   25059 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-845802"
	I1024 19:15:13.596590   25059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:15:13.596510   25059 kapi.go:59] client config for ingress-addon-legacy-845802: &rest.Config{Host:"https://192.168.39.131:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:15:13.596590   25059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:15:13.596629   25059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:15:13.596631   25059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:15:13.597261   25059 cert_rotation.go:137] Starting client certificate rotation controller
	I1024 19:15:13.611140   25059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I1024 19:15:13.611531   25059 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:15:13.611731   25059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35917
	I1024 19:15:13.612009   25059 main.go:141] libmachine: Using API Version  1
	I1024 19:15:13.612035   25059 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:15:13.612088   25059 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:15:13.612334   25059 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:15:13.612574   25059 main.go:141] libmachine: Using API Version  1
	I1024 19:15:13.612600   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetState
	I1024 19:15:13.612635   25059 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:15:13.612913   25059 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:15:13.613547   25059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:15:13.613584   25059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:15:13.615031   25059 kapi.go:59] client config for ingress-addon-legacy-845802: &rest.Config{Host:"https://192.168.39.131:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:15:13.615361   25059 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-845802"
	I1024 19:15:13.615404   25059 host.go:66] Checking if "ingress-addon-legacy-845802" exists ...
	I1024 19:15:13.615806   25059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:15:13.615844   25059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:15:13.627826   25059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I1024 19:15:13.628248   25059 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:15:13.628737   25059 main.go:141] libmachine: Using API Version  1
	I1024 19:15:13.628762   25059 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:15:13.629074   25059 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:15:13.629241   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetState
	I1024 19:15:13.630049   25059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I1024 19:15:13.630430   25059 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:15:13.630890   25059 main.go:141] libmachine: Using API Version  1
	I1024 19:15:13.630913   25059 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:15:13.631103   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .DriverName
	I1024 19:15:13.631220   25059 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:15:13.633216   25059 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:15:13.631477   25059 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-845802" context rescaled to 1 replicas
	I1024 19:15:13.631793   25059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:15:13.634704   25059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:15:13.634732   25059 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:15:13.636405   25059 out.go:177] * Verifying Kubernetes components...
	I1024 19:15:13.634793   25059 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:15:13.637784   25059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:15:13.637807   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:15:13.637838   25059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:15:13.641505   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:15:13.641949   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:15:13.642009   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:15:13.642211   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHPort
	I1024 19:15:13.642395   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:15:13.642583   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHUsername
	I1024 19:15:13.642777   25059 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802/id_rsa Username:docker}
	I1024 19:15:13.649600   25059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37503
	I1024 19:15:13.650063   25059 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:15:13.650500   25059 main.go:141] libmachine: Using API Version  1
	I1024 19:15:13.650520   25059 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:15:13.650896   25059 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:15:13.651058   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetState
	I1024 19:15:13.652664   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .DriverName
	I1024 19:15:13.652927   25059 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:15:13.652941   25059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:15:13.652954   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHHostname
	I1024 19:15:13.656079   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:15:13.656505   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f2:8c", ip: ""} in network mk-ingress-addon-legacy-845802: {Iface:virbr1 ExpiryTime:2023-10-24 20:14:25 +0000 UTC Type:0 Mac:52:54:00:c8:f2:8c Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:ingress-addon-legacy-845802 Clientid:01:52:54:00:c8:f2:8c}
	I1024 19:15:13.656533   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | domain ingress-addon-legacy-845802 has defined IP address 192.168.39.131 and MAC address 52:54:00:c8:f2:8c in network mk-ingress-addon-legacy-845802
	I1024 19:15:13.656723   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHPort
	I1024 19:15:13.656920   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHKeyPath
	I1024 19:15:13.657097   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .GetSSHUsername
	I1024 19:15:13.657248   25059 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/ingress-addon-legacy-845802/id_rsa Username:docker}
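	[annotation] Each "new ssh client" line above connects to the VM at 192.168.39.131:22 as user "docker" with the machine's id_rsa key so the addon manifests can be copied and applied. A minimal golang.org/x/crypto/ssh sketch of that connection step; this is not minikube's actual sshutil code, and the relaxed host-key check is for the sketch only.

    package sshutil // illustrative, not minikube's sshutil package

    import (
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // newSSHClient dials the VM's SSH port as user "docker" using the private key
    // path logged above (…/machines/ingress-addon-legacy-845802/id_rsa).
    func newSSHClient(ip, keyPath string) (*ssh.Client, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in real use
    	}
    	return ssh.Dial("tcp", ip+":22", cfg)
    }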
	I1024 19:15:13.855199   25059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:15:13.881074   25059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:15:13.943684   25059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 19:15:13.944232   25059 kapi.go:59] client config for ingress-addon-legacy-845802: &rest.Config{Host:"https://192.168.39.131:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:15:13.944460   25059 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-845802" to be "Ready" ...
	I1024 19:15:13.984965   25059 node_ready.go:49] node "ingress-addon-legacy-845802" has status "Ready":"True"
	I1024 19:15:13.984992   25059 node_ready.go:38] duration metric: took 40.516389ms waiting for node "ingress-addon-legacy-845802" to be "Ready" ...
	I1024 19:15:13.985002   25059 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:15:14.005931   25059 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-djszs" in "kube-system" namespace to be "Ready" ...
	I1024 19:15:14.970411   25059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.11517624s)
	I1024 19:15:14.970461   25059 main.go:141] libmachine: Making call to close driver server
	I1024 19:15:14.970475   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .Close
	I1024 19:15:14.970489   25059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.089380927s)
	I1024 19:15:14.970525   25059 main.go:141] libmachine: Making call to close driver server
	I1024 19:15:14.970548   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .Close
	I1024 19:15:14.970580   25059 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.026853816s)
	I1024 19:15:14.970605   25059 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1024 19:15:14.970853   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Closing plugin on server side
	I1024 19:15:14.970853   25059 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:15:14.970892   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Closing plugin on server side
	I1024 19:15:14.970902   25059 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:15:14.970917   25059 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:15:14.970931   25059 main.go:141] libmachine: Making call to close driver server
	I1024 19:15:14.970944   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .Close
	I1024 19:15:14.970977   25059 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:15:14.970993   25059 main.go:141] libmachine: Making call to close driver server
	I1024 19:15:14.971003   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .Close
	I1024 19:15:14.971220   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Closing plugin on server side
	I1024 19:15:14.971227   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Closing plugin on server side
	I1024 19:15:14.971233   25059 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:15:14.971251   25059 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:15:14.971262   25059 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:15:14.971272   25059 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:15:14.998309   25059 main.go:141] libmachine: Making call to close driver server
	I1024 19:15:14.998332   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) Calling .Close
	I1024 19:15:14.998632   25059 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:15:14.998652   25059 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:15:14.998677   25059 main.go:141] libmachine: (ingress-addon-legacy-845802) DBG | Closing plugin on server side
	I1024 19:15:15.000775   25059 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1024 19:15:15.002494   25059 addons.go:502] enable addons completed in 1.406549097s: enabled=[storage-provisioner default-storageclass]
	I1024 19:15:15.090496   25059 pod_ready.go:97] error getting pod "coredns-66bff467f8-djszs" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-djszs" not found
	I1024 19:15:15.090523   25059 pod_ready.go:81] duration metric: took 1.084554738s waiting for pod "coredns-66bff467f8-djszs" in "kube-system" namespace to be "Ready" ...
	E1024 19:15:15.090535   25059 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-djszs" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-djszs" not found
	I1024 19:15:15.090541   25059 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace to be "Ready" ...
	I1024 19:15:17.109013   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:19.110697   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:21.609531   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:23.610476   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:26.109874   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:28.110454   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:30.610474   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:33.111574   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:35.112232   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:37.609216   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:39.609321   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:41.610671   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:44.110602   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:46.110691   25059 pod_ready.go:102] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"False"
	I1024 19:15:47.110756   25059 pod_ready.go:92] pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace has status "Ready":"True"
	I1024 19:15:47.110788   25059 pod_ready.go:81] duration metric: took 32.020235293s waiting for pod "coredns-66bff467f8-xv4wm" in "kube-system" namespace to be "Ready" ...
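	[annotation] The pod_ready.go lines above poll each system pod until its Ready condition reports True (the surviving coredns pod took about 32 s). A minimal client-go sketch of that readiness test, assuming an already configured clientset; the function and package names are illustrative, not minikube's exact helper.

    package podwait // illustrative package name

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the named pod's Ready condition is True,
    // the same "Ready":"True"/"False" status printed in the log above.
    func isPodReady(ctx context.Context, c kubernetes.Interface, namespace, name string) (bool, error) {
    	pod, err := c.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }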
	I1024 19:15:47.110801   25059 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-845802" in "kube-system" namespace to be "Ready" ...
	I1024 19:15:47.115861   25059 pod_ready.go:92] pod "etcd-ingress-addon-legacy-845802" in "kube-system" namespace has status "Ready":"True"
	I1024 19:15:47.115880   25059 pod_ready.go:81] duration metric: took 5.071902ms waiting for pod "etcd-ingress-addon-legacy-845802" in "kube-system" namespace to be "Ready" ...
	I1024 19:15:47.115892   25059 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-845802" in "kube-system" namespace to be "Ready" ...
	I1024 19:15:47.120592   25059 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-845802" in "kube-system" namespace has status "Ready":"True"
	I1024 19:15:47.120609   25059 pod_ready.go:81] duration metric: took 4.710697ms waiting for pod "kube-apiserver-ingress-addon-legacy-845802" in "kube-system" namespace to be "Ready" ...
	I1024 19:15:47.120616   25059 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-845802" in "kube-system" namespace to be "Ready" ...
	I1024 19:15:47.125192   25059 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-845802" in "kube-system" namespace has status "Ready":"True"
	I1024 19:15:47.125208   25059 pod_ready.go:81] duration metric: took 4.586487ms waiting for pod "kube-controller-manager-ingress-addon-legacy-845802" in "kube-system" namespace to be "Ready" ...
	I1024 19:15:47.125217   25059 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ptcwq" in "kube-system" namespace to be "Ready" ...
	I1024 19:15:47.131316   25059 pod_ready.go:92] pod "kube-proxy-ptcwq" in "kube-system" namespace has status "Ready":"True"
	I1024 19:15:47.131334   25059 pod_ready.go:81] duration metric: took 6.108832ms waiting for pod "kube-proxy-ptcwq" in "kube-system" namespace to be "Ready" ...
	I1024 19:15:47.131343   25059 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-845802" in "kube-system" namespace to be "Ready" ...
	I1024 19:15:47.304758   25059 request.go:629] Waited for 173.351682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.131:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-845802
	I1024 19:15:47.504229   25059 request.go:629] Waited for 196.400036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.131:8443/api/v1/nodes/ingress-addon-legacy-845802
	I1024 19:15:47.507855   25059 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-845802" in "kube-system" namespace has status "Ready":"True"
	I1024 19:15:47.507877   25059 pod_ready.go:81] duration metric: took 376.525715ms waiting for pod "kube-scheduler-ingress-addon-legacy-845802" in "kube-system" namespace to be "Ready" ...
	I1024 19:15:47.507888   25059 pod_ready.go:38] duration metric: took 33.522875956s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:15:47.507915   25059 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:15:47.507963   25059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:15:47.522046   25059 api_server.go:72] duration metric: took 33.887256686s to wait for apiserver process to appear ...
	I1024 19:15:47.522062   25059 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:15:47.522075   25059 api_server.go:253] Checking apiserver healthz at https://192.168.39.131:8443/healthz ...
	I1024 19:15:47.527120   25059 api_server.go:279] https://192.168.39.131:8443/healthz returned 200:
	ok
	I1024 19:15:47.528079   25059 api_server.go:141] control plane version: v1.18.20
	I1024 19:15:47.528103   25059 api_server.go:131] duration metric: took 6.034396ms to wait for apiserver health ...
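	[annotation] api_server.go then probes https://192.168.39.131:8443/healthz directly and gets a 200 with body "ok". A minimal sketch of such a probe, trusting the cluster CA whose path appears in the client config logged earlier; the timeout and function name are illustrative, and minikube's own check may differ in detail.

    package apicheck // illustrative package name

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    // apiserverHealthy probes https://<host>/healthz and expects the "ok" body
    // logged above. caFile is the cluster CA certificate (e.g. .minikube/ca.crt).
    func apiserverHealthy(host, caFile string) (bool, error) {
    	caPEM, err := os.ReadFile(caFile)
    	if err != nil {
    		return false, err
    	}
    	pool := x509.NewCertPool()
    	if !pool.AppendCertsFromPEM(caPEM) {
    		return false, fmt.Errorf("no certificates found in %s", caFile)
    	}
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}
    	resp, err := client.Get("https://" + host + "/healthz")
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return false, err
    	}
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }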
	I1024 19:15:47.528112   25059 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:15:47.704559   25059 request.go:629] Waited for 176.388517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.131:8443/api/v1/namespaces/kube-system/pods
	I1024 19:15:47.711155   25059 system_pods.go:59] 7 kube-system pods found
	I1024 19:15:47.711193   25059 system_pods.go:61] "coredns-66bff467f8-xv4wm" [a8c5bf65-60c1-4065-b0b6-1367374a1a03] Running
	I1024 19:15:47.711200   25059 system_pods.go:61] "etcd-ingress-addon-legacy-845802" [91c2e1db-dbff-46dc-a736-b5a380f05176] Running
	I1024 19:15:47.711205   25059 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-845802" [20c3dbea-ba04-4423-bc58-169fc941aab3] Running
	I1024 19:15:47.711209   25059 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-845802" [dc918fb7-5399-4e6b-971f-b62b367af1d3] Running
	I1024 19:15:47.711217   25059 system_pods.go:61] "kube-proxy-ptcwq" [70b277ff-0386-4361-8d5f-47afb09c2c46] Running
	I1024 19:15:47.711221   25059 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-845802" [7485bc65-663e-4f2f-99fc-44e6f6454271] Running
	I1024 19:15:47.711225   25059 system_pods.go:61] "storage-provisioner" [1d7d8ff7-9af2-4691-a4bf-b3906012a327] Running
	I1024 19:15:47.711230   25059 system_pods.go:74] duration metric: took 183.113396ms to wait for pod list to return data ...
	I1024 19:15:47.711237   25059 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:15:47.904688   25059 request.go:629] Waited for 193.392639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.131:8443/api/v1/namespaces/default/serviceaccounts
	I1024 19:15:47.907502   25059 default_sa.go:45] found service account: "default"
	I1024 19:15:47.907522   25059 default_sa.go:55] duration metric: took 196.27987ms for default service account to be created ...
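	[annotation] The repeated "Waited for ... due to client-side throttling" request lines in this part of the log come from client-go's default rate limiter (QPS 5, burst 10), which these back-to-back pod and node GETs exceed. A sketch of how a caller could raise those limits when building a client from the kubeconfig path logged above; the 50/100 values are illustrative, not what minikube configures.

    package clientutil // illustrative package name

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newClientWithHigherRateLimit builds a clientset whose rate limiter allows
    // more than the client-go defaults, reducing the throttling waits seen above.
    func newClientWithHigherRateLimit(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // client-go default is 5 requests/second
    	cfg.Burst = 100 // client-go default burst is 10
    	return kubernetes.NewForConfig(cfg)
    }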
	I1024 19:15:47.907530   25059 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:15:48.103888   25059 request.go:629] Waited for 196.303687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.131:8443/api/v1/namespaces/kube-system/pods
	I1024 19:15:48.109975   25059 system_pods.go:86] 7 kube-system pods found
	I1024 19:15:48.110005   25059 system_pods.go:89] "coredns-66bff467f8-xv4wm" [a8c5bf65-60c1-4065-b0b6-1367374a1a03] Running
	I1024 19:15:48.110011   25059 system_pods.go:89] "etcd-ingress-addon-legacy-845802" [91c2e1db-dbff-46dc-a736-b5a380f05176] Running
	I1024 19:15:48.110017   25059 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-845802" [20c3dbea-ba04-4423-bc58-169fc941aab3] Running
	I1024 19:15:48.110021   25059 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-845802" [dc918fb7-5399-4e6b-971f-b62b367af1d3] Running
	I1024 19:15:48.110025   25059 system_pods.go:89] "kube-proxy-ptcwq" [70b277ff-0386-4361-8d5f-47afb09c2c46] Running
	I1024 19:15:48.110033   25059 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-845802" [7485bc65-663e-4f2f-99fc-44e6f6454271] Running
	I1024 19:15:48.110036   25059 system_pods.go:89] "storage-provisioner" [1d7d8ff7-9af2-4691-a4bf-b3906012a327] Running
	I1024 19:15:48.110042   25059 system_pods.go:126] duration metric: took 202.508433ms to wait for k8s-apps to be running ...
	I1024 19:15:48.110048   25059 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:15:48.110107   25059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:15:48.123360   25059 system_svc.go:56] duration metric: took 13.304245ms WaitForService to wait for kubelet.
	I1024 19:15:48.123379   25059 kubeadm.go:581] duration metric: took 34.488594257s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:15:48.123395   25059 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:15:48.304737   25059 request.go:629] Waited for 181.288611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.131:8443/api/v1/nodes
	I1024 19:15:48.309888   25059 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:15:48.309926   25059 node_conditions.go:123] node cpu capacity is 2
	I1024 19:15:48.309940   25059 node_conditions.go:105] duration metric: took 186.540185ms to run NodePressure ...
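	[annotation] node_conditions.go reads each node's capacity (17784752Ki of ephemeral storage and 2 CPUs here) and verifies that no pressure condition is set before declaring NodePressure done. A client-go sketch of that kind of check; the helper and package names are illustrative, not minikube's exact code.

    package nodecheck // illustrative package name

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // checkNodePressure lists the nodes, prints the capacity fields shown in the
    // log above, and fails if MemoryPressure, DiskPressure or PIDPressure is True.
    func checkNodePressure(ctx context.Context, c kubernetes.Interface) error {
    	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral())
    		fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu())
    		for _, cond := range n.Status.Conditions {
    			switch cond.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				if cond.Status == corev1.ConditionTrue {
    					return fmt.Errorf("node %s reports %s", n.Name, cond.Type)
    				}
    			}
    		}
    	}
    	return nil
    }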
	I1024 19:15:48.309955   25059 start.go:228] waiting for startup goroutines ...
	I1024 19:15:48.309964   25059 start.go:233] waiting for cluster config update ...
	I1024 19:15:48.309977   25059 start.go:242] writing updated cluster config ...
	I1024 19:15:48.310268   25059 ssh_runner.go:195] Run: rm -f paused
	I1024 19:15:48.358718   25059 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1024 19:15:48.360175   25059 out.go:177] 
	W1024 19:15:48.361427   25059 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1024 19:15:48.362717   25059 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1024 19:15:48.364193   25059 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-845802" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 19:14:21 UTC, ends at Tue 2023-10-24 19:18:56 UTC. --
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.471740495Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698175136471730353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=1f63944a-61e6-4623-9d05-2e632864f07b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.472370958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3e71d771-061f-47be-869e-5d870f9f5d0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.472417366Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3e71d771-061f-47be-869e-5d870f9f5d0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.472704266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1192f736d3780878fc99145cf6eb766ad645f6803c649146aadb04e1f77f2462,PodSandboxId:4339c440071d6a589af312502ac17d836d778df6957bfb7fc0af36f2b083c639,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1698175119805456596,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-xw6pj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d473442e-2879-4ce4-b145-f611fa8dd42c,},Annotations:map[string]string{io.kubernetes.container.hash: 76d0b73f,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3881d20079bff8b373b9742ee0d0eb2cf5d68124f9d1ed463b684418aa63137,PodSandboxId:a186183c40da1774518d2b84e2717b44367f66c6dc1832800e55a261e20b8796,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,State:CONTAINER_RUNNING,CreatedAt:1698174977460976513,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 238d799a-11dc-49ea-94eb-b98d29b3ceab,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 3c3c41fb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14da0036aed4f27c4a3d782995bf5784832049911f164eefe5bfc4a4c4116df9,PodSandboxId:1ae39cb5902ad44baa6d863f67a7542cf9c9da79c508d3bacf5f0f31173c96eb,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1698174959630219179,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-9wjt9,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 14737d25-0058-48da-a733-daf3fc7f6867,},Annotations:map[string]string{io.kubernetes.container.hash: d60cb3fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c51617683b8d0d1fdd39411693084718e52599831a18907297215934018f230d,PodSandboxId:8efa47b086743afc309f71f551d58af41102f03fed8b64681d6d65a5cd6b6462,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698174951419463421,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b4drr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18dcf47d-d6e5-4757-80ce-e86b8a4ab0cd,},Annotations:map[string]string{io.kubernetes.container.hash: 45425ef7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647811ef1f63b0690985097bc03b7fa8888c80578602de1e17c9a6237a5041ad,PodSandboxId:5da9a1285b0a5e2d39795674c22a07b11ddeb8fdcac43050b02d22d3f1da4b85,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698174951246977714,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xzh4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fc3638d1-0e02-4edc-8e71-758ff4abb7cf,},Annotations:map[string]string{io.kubernetes.container.hash: 41f1c1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e531483bd252374e672bd94222f2a09d72d255ef82c303a19f516d737fbfb7,PodSandboxId:a8c5eafcd1d9db4405005ebcdf48f2adfb385cdced5120cc1a331eb9b1cd7896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698174915795211148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7d8ff7-9af2-4691-a4bf-b3906012a327,},Annotations:map[string]string{io.kubernetes.container.hash: c38f1af2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:735ff99c870bf4a3cbcb96f2e98e2515267570be0c88ac24f318dbfd47cd3bd9,PodSandboxId:f25e5e0e9aea8dcf8e614ea9fa625540a438339b610274e58bb4786028b8f577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1698174915285294178,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ptcwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70b277ff-0386-4361-8d5f-47afb09c2c46,},Annotations:map[string]string{io.kubernetes.container.hash: 5fdbd36f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b994d238a40d2a61780d6e75b9cad0c94bc23b1afd5f6b2aa7408c23b2ff1d,PodSandboxId:599578e2040fa25eadb8625b00ff0b1e53d7f0497180327d8db999762a022f44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1698174914665168421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xv4wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8c5bf65-60c1-4065-b0b6-1367374a1a03,},Annotations:map[string]string{io.kubernetes.container.hash: 75a139ee,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae54563b97ef2834ab3cb16e95e987b6e63212d52fbec89c8343290262c3b6f,Pod
SandboxId:83bb0f70a7f511fe20a5c79e68dea855485f348ef16a7dfed0a314ca3e2b2d0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1698174890421995744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9781e0982e84a2348737fe13287480dd,},Annotations:map[string]string{io.kubernetes.container.hash: e5a26dac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fd8c3edd245985de1d5d96341025e35b83e39528893c2919b8e474e9fb0a54,PodSandboxId:8c0888e62d4d9c61b3a0d451edd1861b91d4
153097a3d057c0ff144d74c8c3d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1698174889197560122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36674ee96d01537da945e29f77aeee8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2f0ac9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e05be1927e82522044bbd0f41332943939987540c15b80f65977c7275ca33c8,PodSandboxId:cba161c3e5bbbab85ccd932e88a0a264b0af785a10
f3d8d15f8e364065cb57a7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1698174889179504674,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51691a22c33858a05d0226e216ca8067d9812d9abbbd00f0299a1a94897b319,PodSandboxId:a79b758faa02
7a28eeb7e10bb213bd444011c338283da62f85c43c3f11d0932b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1698174888940295870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3e71d771-061f-47be-869e-5d870f9f5d0d name=/runtime.v1.RuntimeSer
vice/ListContainers
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.512303999Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=450ed84f-19a4-4275-ba12-9bd35f1bbdf5 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.512359955Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=450ed84f-19a4-4275-ba12-9bd35f1bbdf5 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.513504113Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1b4bc83d-e8cc-4ae5-9ca8-aaf57750f7cf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.514062161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698175136514047778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=1b4bc83d-e8cc-4ae5-9ca8-aaf57750f7cf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.514926616Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e3d39ee6-fa52-40e1-ad92-6d4cc3558e7b name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.515006095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e3d39ee6-fa52-40e1-ad92-6d4cc3558e7b name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.515320412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1192f736d3780878fc99145cf6eb766ad645f6803c649146aadb04e1f77f2462,PodSandboxId:4339c440071d6a589af312502ac17d836d778df6957bfb7fc0af36f2b083c639,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1698175119805456596,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-xw6pj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d473442e-2879-4ce4-b145-f611fa8dd42c,},Annotations:map[string]string{io.kubernetes.container.hash: 76d0b73f,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3881d20079bff8b373b9742ee0d0eb2cf5d68124f9d1ed463b684418aa63137,PodSandboxId:a186183c40da1774518d2b84e2717b44367f66c6dc1832800e55a261e20b8796,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,State:CONTAINER_RUNNING,CreatedAt:1698174977460976513,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 238d799a-11dc-49ea-94eb-b98d29b3ceab,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 3c3c41fb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14da0036aed4f27c4a3d782995bf5784832049911f164eefe5bfc4a4c4116df9,PodSandboxId:1ae39cb5902ad44baa6d863f67a7542cf9c9da79c508d3bacf5f0f31173c96eb,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1698174959630219179,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-9wjt9,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 14737d25-0058-48da-a733-daf3fc7f6867,},Annotations:map[string]string{io.kubernetes.container.hash: d60cb3fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c51617683b8d0d1fdd39411693084718e52599831a18907297215934018f230d,PodSandboxId:8efa47b086743afc309f71f551d58af41102f03fed8b64681d6d65a5cd6b6462,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698174951419463421,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b4drr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18dcf47d-d6e5-4757-80ce-e86b8a4ab0cd,},Annotations:map[string]string{io.kubernetes.container.hash: 45425ef7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647811ef1f63b0690985097bc03b7fa8888c80578602de1e17c9a6237a5041ad,PodSandboxId:5da9a1285b0a5e2d39795674c22a07b11ddeb8fdcac43050b02d22d3f1da4b85,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698174951246977714,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xzh4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fc3638d1-0e02-4edc-8e71-758ff4abb7cf,},Annotations:map[string]string{io.kubernetes.container.hash: 41f1c1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e531483bd252374e672bd94222f2a09d72d255ef82c303a19f516d737fbfb7,PodSandboxId:a8c5eafcd1d9db4405005ebcdf48f2adfb385cdced5120cc1a331eb9b1cd7896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698174915795211148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7d8ff7-9af2-4691-a4bf-b3906012a327,},Annotations:map[string]string{io.kubernetes.container.hash: c38f1af2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:735ff99c870bf4a3cbcb96f2e98e2515267570be0c88ac24f318dbfd47cd3bd9,PodSandboxId:f25e5e0e9aea8dcf8e614ea9fa625540a438339b610274e58bb4786028b8f577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1698174915285294178,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ptcwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70b277ff-0386-4361-8d5f-47afb09c2c46,},Annotations:map[string]string{io.kubernetes.container.hash: 5fdbd36f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b994d238a40d2a61780d6e75b9cad0c94bc23b1afd5f6b2aa7408c23b2ff1d,PodSandboxId:599578e2040fa25eadb8625b00ff0b1e53d7f0497180327d8db999762a022f44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1698174914665168421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xv4wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8c5bf65-60c1-4065-b0b6-1367374a1a03,},Annotations:map[string]string{io.kubernetes.container.hash: 75a139ee,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae54563b97ef2834ab3cb16e95e987b6e63212d52fbec89c8343290262c3b6f,Pod
SandboxId:83bb0f70a7f511fe20a5c79e68dea855485f348ef16a7dfed0a314ca3e2b2d0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1698174890421995744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9781e0982e84a2348737fe13287480dd,},Annotations:map[string]string{io.kubernetes.container.hash: e5a26dac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fd8c3edd245985de1d5d96341025e35b83e39528893c2919b8e474e9fb0a54,PodSandboxId:8c0888e62d4d9c61b3a0d451edd1861b91d4
153097a3d057c0ff144d74c8c3d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1698174889197560122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36674ee96d01537da945e29f77aeee8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2f0ac9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e05be1927e82522044bbd0f41332943939987540c15b80f65977c7275ca33c8,PodSandboxId:cba161c3e5bbbab85ccd932e88a0a264b0af785a10
f3d8d15f8e364065cb57a7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1698174889179504674,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51691a22c33858a05d0226e216ca8067d9812d9abbbd00f0299a1a94897b319,PodSandboxId:a79b758faa02
7a28eeb7e10bb213bd444011c338283da62f85c43c3f11d0932b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1698174888940295870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e3d39ee6-fa52-40e1-ad92-6d4cc3558e7b name=/runtime.v1.RuntimeSer
vice/ListContainers
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.556064719Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=06beb7f9-36b0-4ac7-afc8-c4f416794a3f name=/runtime.v1.RuntimeService/Version
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.556151560Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=06beb7f9-36b0-4ac7-afc8-c4f416794a3f name=/runtime.v1.RuntimeService/Version
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.557420078Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6c519ffa-03db-43ba-aaa7-147d44fd2fab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.557985856Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698175136557967300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=6c519ffa-03db-43ba-aaa7-147d44fd2fab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.558747644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d507f888-05cb-4e1d-8eef-7fbe4cf45558 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.558819007Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d507f888-05cb-4e1d-8eef-7fbe4cf45558 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.559225562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1192f736d3780878fc99145cf6eb766ad645f6803c649146aadb04e1f77f2462,PodSandboxId:4339c440071d6a589af312502ac17d836d778df6957bfb7fc0af36f2b083c639,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1698175119805456596,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-xw6pj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d473442e-2879-4ce4-b145-f611fa8dd42c,},Annotations:map[string]string{io.kubernetes.container.hash: 76d0b73f,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3881d20079bff8b373b9742ee0d0eb2cf5d68124f9d1ed463b684418aa63137,PodSandboxId:a186183c40da1774518d2b84e2717b44367f66c6dc1832800e55a261e20b8796,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,State:CONTAINER_RUNNING,CreatedAt:1698174977460976513,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 238d799a-11dc-49ea-94eb-b98d29b3ceab,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 3c3c41fb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14da0036aed4f27c4a3d782995bf5784832049911f164eefe5bfc4a4c4116df9,PodSandboxId:1ae39cb5902ad44baa6d863f67a7542cf9c9da79c508d3bacf5f0f31173c96eb,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1698174959630219179,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-9wjt9,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 14737d25-0058-48da-a733-daf3fc7f6867,},Annotations:map[string]string{io.kubernetes.container.hash: d60cb3fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c51617683b8d0d1fdd39411693084718e52599831a18907297215934018f230d,PodSandboxId:8efa47b086743afc309f71f551d58af41102f03fed8b64681d6d65a5cd6b6462,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698174951419463421,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b4drr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18dcf47d-d6e5-4757-80ce-e86b8a4ab0cd,},Annotations:map[string]string{io.kubernetes.container.hash: 45425ef7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647811ef1f63b0690985097bc03b7fa8888c80578602de1e17c9a6237a5041ad,PodSandboxId:5da9a1285b0a5e2d39795674c22a07b11ddeb8fdcac43050b02d22d3f1da4b85,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698174951246977714,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xzh4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fc3638d1-0e02-4edc-8e71-758ff4abb7cf,},Annotations:map[string]string{io.kubernetes.container.hash: 41f1c1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e531483bd252374e672bd94222f2a09d72d255ef82c303a19f516d737fbfb7,PodSandboxId:a8c5eafcd1d9db4405005ebcdf48f2adfb385cdced5120cc1a331eb9b1cd7896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698174915795211148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7d8ff7-9af2-4691-a4bf-b3906012a327,},Annotations:map[string]string{io.kubernetes.container.hash: c38f1af2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:735ff99c870bf4a3cbcb96f2e98e2515267570be0c88ac24f318dbfd47cd3bd9,PodSandboxId:f25e5e0e9aea8dcf8e614ea9fa625540a438339b610274e58bb4786028b8f577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1698174915285294178,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ptcwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70b277ff-0386-4361-8d5f-47afb09c2c46,},Annotations:map[string]string{io.kubernetes.container.hash: 5fdbd36f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b994d238a40d2a61780d6e75b9cad0c94bc23b1afd5f6b2aa7408c23b2ff1d,PodSandboxId:599578e2040fa25eadb8625b00ff0b1e53d7f0497180327d8db999762a022f44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1698174914665168421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xv4wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8c5bf65-60c1-4065-b0b6-1367374a1a03,},Annotations:map[string]string{io.kubernetes.container.hash: 75a139ee,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae54563b97ef2834ab3cb16e95e987b6e63212d52fbec89c8343290262c3b6f,Pod
SandboxId:83bb0f70a7f511fe20a5c79e68dea855485f348ef16a7dfed0a314ca3e2b2d0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1698174890421995744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9781e0982e84a2348737fe13287480dd,},Annotations:map[string]string{io.kubernetes.container.hash: e5a26dac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fd8c3edd245985de1d5d96341025e35b83e39528893c2919b8e474e9fb0a54,PodSandboxId:8c0888e62d4d9c61b3a0d451edd1861b91d4
153097a3d057c0ff144d74c8c3d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1698174889197560122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36674ee96d01537da945e29f77aeee8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2f0ac9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e05be1927e82522044bbd0f41332943939987540c15b80f65977c7275ca33c8,PodSandboxId:cba161c3e5bbbab85ccd932e88a0a264b0af785a10
f3d8d15f8e364065cb57a7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1698174889179504674,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51691a22c33858a05d0226e216ca8067d9812d9abbbd00f0299a1a94897b319,PodSandboxId:a79b758faa02
7a28eeb7e10bb213bd444011c338283da62f85c43c3f11d0932b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1698174888940295870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d507f888-05cb-4e1d-8eef-7fbe4cf45558 name=/runtime.v1.RuntimeSer
vice/ListContainers
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.592788615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a2ce527e-1673-41f2-b42b-ffbfa84bf6bd name=/runtime.v1.RuntimeService/Version
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.592924061Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a2ce527e-1673-41f2-b42b-ffbfa84bf6bd name=/runtime.v1.RuntimeService/Version
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.594267354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5d152094-af60-47b2-91ee-ecc3fcde961b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.598089096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698175136594776744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=5d152094-af60-47b2-91ee-ecc3fcde961b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.606506006Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4818f29e-3af1-44bf-b334-bd3683db61e4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.606614705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4818f29e-3af1-44bf-b334-bd3683db61e4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:18:56 ingress-addon-legacy-845802 crio[720]: time="2023-10-24 19:18:56.606974244Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1192f736d3780878fc99145cf6eb766ad645f6803c649146aadb04e1f77f2462,PodSandboxId:4339c440071d6a589af312502ac17d836d778df6957bfb7fc0af36f2b083c639,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1698175119805456596,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-xw6pj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d473442e-2879-4ce4-b145-f611fa8dd42c,},Annotations:map[string]string{io.kubernetes.container.hash: 76d0b73f,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3881d20079bff8b373b9742ee0d0eb2cf5d68124f9d1ed463b684418aa63137,PodSandboxId:a186183c40da1774518d2b84e2717b44367f66c6dc1832800e55a261e20b8796,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf,State:CONTAINER_RUNNING,CreatedAt:1698174977460976513,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 238d799a-11dc-49ea-94eb-b98d29b3ceab,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 3c3c41fb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14da0036aed4f27c4a3d782995bf5784832049911f164eefe5bfc4a4c4116df9,PodSandboxId:1ae39cb5902ad44baa6d863f67a7542cf9c9da79c508d3bacf5f0f31173c96eb,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1698174959630219179,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-9wjt9,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 14737d25-0058-48da-a733-daf3fc7f6867,},Annotations:map[string]string{io.kubernetes.container.hash: d60cb3fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c51617683b8d0d1fdd39411693084718e52599831a18907297215934018f230d,PodSandboxId:8efa47b086743afc309f71f551d58af41102f03fed8b64681d6d65a5cd6b6462,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698174951419463421,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b4drr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18dcf47d-d6e5-4757-80ce-e86b8a4ab0cd,},Annotations:map[string]string{io.kubernetes.container.hash: 45425ef7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647811ef1f63b0690985097bc03b7fa8888c80578602de1e17c9a6237a5041ad,PodSandboxId:5da9a1285b0a5e2d39795674c22a07b11ddeb8fdcac43050b02d22d3f1da4b85,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1698174951246977714,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xzh4w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fc3638d1-0e02-4edc-8e71-758ff4abb7cf,},Annotations:map[string]string{io.kubernetes.container.hash: 41f1c1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e531483bd252374e672bd94222f2a09d72d255ef82c303a19f516d737fbfb7,PodSandboxId:a8c5eafcd1d9db4405005ebcdf48f2adfb385cdced5120cc1a331eb9b1cd7896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698174915795211148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7d8ff7-9af2-4691-a4bf-b3906012a327,},Annotations:map[string]string{io.kubernetes.container.hash: c38f1af2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:735ff99c870bf4a3cbcb96f2e98e2515267570be0c88ac24f318dbfd47cd3bd9,PodSandboxId:f25e5e0e9aea8dcf8e614ea9fa625540a438339b610274e58bb4786028b8f577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1698174915285294178,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ptcwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70b277ff-0386-4361-8d5f-47afb09c2c46,},Annotations:map[string]string{io.kubernetes.container.hash: 5fdbd36f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b994d238a40d2a61780d6e75b9cad0c94bc23b1afd5f6b2aa7408c23b2ff1d,PodSandboxId:599578e2040fa25eadb8625b00ff0b1e53d7f0497180327d8db999762a022f44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1698174914665168421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xv4wm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8c5bf65-60c1-4065-b0b6-1367374a1a03,},Annotations:map[string]string{io.kubernetes.container.hash: 75a139ee,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae54563b97ef2834ab3cb16e95e987b6e63212d52fbec89c8343290262c3b6f,Pod
SandboxId:83bb0f70a7f511fe20a5c79e68dea855485f348ef16a7dfed0a314ca3e2b2d0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1698174890421995744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9781e0982e84a2348737fe13287480dd,},Annotations:map[string]string{io.kubernetes.container.hash: e5a26dac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fd8c3edd245985de1d5d96341025e35b83e39528893c2919b8e474e9fb0a54,PodSandboxId:8c0888e62d4d9c61b3a0d451edd1861b91d4
153097a3d057c0ff144d74c8c3d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1698174889197560122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36674ee96d01537da945e29f77aeee8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c2f0ac9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e05be1927e82522044bbd0f41332943939987540c15b80f65977c7275ca33c8,PodSandboxId:cba161c3e5bbbab85ccd932e88a0a264b0af785a10
f3d8d15f8e364065cb57a7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1698174889179504674,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51691a22c33858a05d0226e216ca8067d9812d9abbbd00f0299a1a94897b319,PodSandboxId:a79b758faa02
7a28eeb7e10bb213bd444011c338283da62f85c43c3f11d0932b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1698174888940295870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-845802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4818f29e-3af1-44bf-b334-bd3683db61e4 name=/runtime.v1.RuntimeSer
vice/ListContainers
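	The repeated Version/ImageFsInfo/ListContainers entries above are the kubelet polling cri-o over the CRI gRPC API on /var/run/crio/crio.sock. Below is a minimal Go sketch of the same ListContainers call; it assumes the k8s.io/cri-api v1 bindings and an insecure local dial, and is illustrative only, not minikube's own code.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        // Dial the cri-o socket (the same endpoint used by the kubelet in this run).
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()

	        client := runtimeapi.NewRuntimeServiceClient(conn)
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        // Empty filter: corresponds to the "No filters were applied" requests in the log above.
	        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	        if err != nil {
	            panic(err)
	        }
	        for _, c := range resp.Containers {
	            fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	        }
	    }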
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1192f736d3780       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6            16 seconds ago      Running             hello-world-app           0                   4339c440071d6       hello-world-app-5f5d8b66bb-xw6pj
	e3881d20079bf       docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf                    2 minutes ago       Running             nginx                     0                   a186183c40da1       nginx
	14da0036aed4f       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   1ae39cb5902ad       ingress-nginx-controller-7fcf777cb7-9wjt9
	c51617683b8d0       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   8efa47b086743       ingress-nginx-admission-patch-b4drr
	647811ef1f63b       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   5da9a1285b0a5       ingress-nginx-admission-create-xzh4w
	43e531483bd25       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   a8c5eafcd1d9d       storage-provisioner
	735ff99c870bf       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   f25e5e0e9aea8       kube-proxy-ptcwq
	64b994d238a40       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   599578e2040fa       coredns-66bff467f8-xv4wm
	dae54563b97ef       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   83bb0f70a7f51       etcd-ingress-addon-legacy-845802
	69fd8c3edd245       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   8c0888e62d4d9       kube-apiserver-ingress-addon-legacy-845802
	4e05be1927e82       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   cba161c3e5bbb       kube-controller-manager-ingress-addon-legacy-845802
	f51691a22c338       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   a79b758faa027       kube-scheduler-ingress-addon-legacy-845802
	
	* 
	* ==> coredns [64b994d238a40d2a61780d6e75b9cad0c94bc23b1afd5f6b2aa7408c23b2ff1d] <==
	* [INFO] 10.244.0.6:33462 - 15152 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000260988s
	[INFO] 10.244.0.6:56232 - 41504 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038233s
	[INFO] 10.244.0.6:33462 - 12722 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067542s
	[INFO] 10.244.0.6:33462 - 13743 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000263915s
	[INFO] 10.244.0.6:56232 - 34661 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000222513s
	[INFO] 10.244.0.6:33462 - 17948 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00015476s
	[INFO] 10.244.0.6:56232 - 24589 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000086938s
	[INFO] 10.244.0.6:33462 - 7492 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000118915s
	[INFO] 10.244.0.6:56232 - 49815 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058715s
	[INFO] 10.244.0.6:33462 - 30376 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.004663354s
	[INFO] 10.244.0.6:56232 - 38874 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000167814s
	[INFO] 10.244.0.6:46509 - 39157 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000062504s
	[INFO] 10.244.0.6:52256 - 7970 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000124668s
	[INFO] 10.244.0.6:52256 - 20166 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075449s
	[INFO] 10.244.0.6:46509 - 19716 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00003186s
	[INFO] 10.244.0.6:46509 - 21651 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045135s
	[INFO] 10.244.0.6:52256 - 1913 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000072373s
	[INFO] 10.244.0.6:46509 - 33367 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049083s
	[INFO] 10.244.0.6:52256 - 14911 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000104193s
	[INFO] 10.244.0.6:52256 - 17299 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063442s
	[INFO] 10.244.0.6:46509 - 27223 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000076709s
	[INFO] 10.244.0.6:46509 - 52636 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073925s
	[INFO] 10.244.0.6:52256 - 23571 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033693s
	[INFO] 10.244.0.6:52256 - 17659 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000031571s
	[INFO] 10.244.0.6:46509 - 48723 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057398s
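	The NXDOMAIN/NOERROR pairs above are the pod resolver's search-list expansion: with the usual in-cluster ndots:5 setting, hello-world-app.default.svc.cluster.local has only four dots, so each search suffix is tried first and only the final as-is query succeeds. The Go sketch below is a hypothetical helper that reproduces that candidate list; the search domains are the typical defaults for a pod in the ingress-nginx namespace, not values captured from this run.

	    package main

	    import (
	        "fmt"
	        "strings"
	    )

	    // candidateQueries mimics the resolver's search-list behaviour: names with fewer
	    // dots than ndots are first tried with each search suffix, then as-is, which is
	    // what produces the NXDOMAIN entries before the final NOERROR in the log above.
	    func candidateQueries(name string, search []string, ndots int) []string {
	        var out []string
	        if strings.Count(name, ".") < ndots {
	            for _, suffix := range search {
	                out = append(out, name+"."+suffix)
	            }
	        }
	        return append(out, name)
	    }

	    func main() {
	        // Assumed in-cluster resolv.conf for a pod in ingress-nginx (ndots:5).
	        search := []string{"ingress-nginx.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	        for _, q := range candidateQueries("hello-world-app.default.svc.cluster.local", search, 5) {
	            fmt.Println(q)
	        }
	    }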
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-845802
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-845802
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=ingress-addon-legacy-845802
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_14_57_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:14:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-845802
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:18:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:16:28 +0000   Tue, 24 Oct 2023 19:14:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:16:28 +0000   Tue, 24 Oct 2023 19:14:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:16:28 +0000   Tue, 24 Oct 2023 19:14:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:16:28 +0000   Tue, 24 Oct 2023 19:15:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.131
	  Hostname:    ingress-addon-legacy-845802
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	System Info:
	  Machine ID:                 0456ce03b18442a1a6648c7b2655d3d3
	  System UUID:                0456ce03-b184-42a1-a664-8c7b2655d3d3
	  Boot ID:                    08121037-9360-48e8-9f68-84fdf0001d80
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-xw6pj                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 coredns-66bff467f8-xv4wm                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m43s
	  kube-system                 etcd-ingress-addon-legacy-845802                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-apiserver-ingress-addon-legacy-845802             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-845802    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-ptcwq                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-scheduler-ingress-addon-legacy-845802             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m58s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m58s  kubelet     Node ingress-addon-legacy-845802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s  kubelet     Node ingress-addon-legacy-845802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s  kubelet     Node ingress-addon-legacy-845802 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m58s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m48s  kubelet     Node ingress-addon-legacy-845802 status is now: NodeReady
	  Normal  Starting                 3m41s  kube-proxy  Starting kube-proxy.
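	The node summary above is kubectl describe node output for ingress-addon-legacy-845802. Below is a minimal client-go sketch that reads the same node conditions; the kubeconfig path is a placeholder and the snippet is illustrative, not part of the test harness.

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Assumes a kubeconfig pointing at the ingress-addon-legacy-845802 cluster (path is a placeholder).
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        node, err := cs.CoreV1().Nodes().Get(context.Background(), "ingress-addon-legacy-845802", metav1.GetOptions{})
	        if err != nil {
	            panic(err)
	        }
	        // Prints the same MemoryPressure/DiskPressure/PIDPressure/Ready rows shown above.
	        for _, cond := range node.Status.Conditions {
	            fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
	        }
	    }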
	
	* 
	* ==> dmesg <==
	* [Oct24 19:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093536] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.393536] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.437355] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.146536] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.025902] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.155328] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.105888] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.146251] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.110981] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.216241] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +7.716402] systemd-fstab-generator[1028]: Ignoring "noauto" for root device
	[  +2.841411] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.792731] systemd-fstab-generator[1420]: Ignoring "noauto" for root device
	[Oct24 19:15] kauditd_printk_skb: 6 callbacks suppressed
	[ +32.942336] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.551835] kauditd_printk_skb: 10 callbacks suppressed
	[Oct24 19:16] kauditd_printk_skb: 3 callbacks suppressed
	[Oct24 19:18] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.013369] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [dae54563b97ef2834ab3cb16e95e987b6e63212d52fbec89c8343290262c3b6f] <==
	* 2023-10-24 19:14:50.537318 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-24 19:14:50.539580 I | etcdserver: 18e6d8b26c9b0c49 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/10/24 19:14:50 INFO: 18e6d8b26c9b0c49 switched to configuration voters=(1794359762391600201)
	2023-10-24 19:14:50.540038 I | etcdserver/membership: added member 18e6d8b26c9b0c49 [https://192.168.39.131:2380] to cluster 86e8c9f2bcca8a81
	2023-10-24 19:14:50.540572 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-24 19:14:50.540719 I | embed: listening for peers on 192.168.39.131:2380
	2023-10-24 19:14:50.540792 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/24 19:14:51 INFO: 18e6d8b26c9b0c49 is starting a new election at term 1
	raft2023/10/24 19:14:51 INFO: 18e6d8b26c9b0c49 became candidate at term 2
	raft2023/10/24 19:14:51 INFO: 18e6d8b26c9b0c49 received MsgVoteResp from 18e6d8b26c9b0c49 at term 2
	raft2023/10/24 19:14:51 INFO: 18e6d8b26c9b0c49 became leader at term 2
	raft2023/10/24 19:14:51 INFO: raft.node: 18e6d8b26c9b0c49 elected leader 18e6d8b26c9b0c49 at term 2
	2023-10-24 19:14:51.427356 I | etcdserver: published {Name:ingress-addon-legacy-845802 ClientURLs:[https://192.168.39.131:2379]} to cluster 86e8c9f2bcca8a81
	2023-10-24 19:14:51.427398 I | embed: ready to serve client requests
	2023-10-24 19:14:51.428084 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-24 19:14:51.428332 I | embed: ready to serve client requests
	2023-10-24 19:14:51.428805 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-24 19:14:51.431331 I | embed: serving client requests on 192.168.39.131:2379
	2023-10-24 19:14:51.434170 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-24 19:14:51.434263 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-24 19:15:13.201676 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (504.83259ms) to execute
	2023-10-24 19:15:13.202457 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 " with result "range_response_count:7 size:5026" took too long (454.409779ms) to execute
	2023-10-24 19:15:14.694131 W | etcdserver: request "header:<ID:885392059550277742 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bff467f8-djszs.1791211650d712d4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bff467f8-djszs.1791211650d712d4\" value_size:746 lease:885392059550277434 >> failure:<>>" with result "size:16" took too long (114.913772ms) to execute
	2023-10-24 19:16:03.915720 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (185.449008ms) to execute
	2023-10-24 19:16:03.916073 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true " with result "range_response_count:0 size:7" took too long (515.611124ms) to execute
	
	* 
	* ==> kernel <==
	*  19:18:56 up 4 min,  0 users,  load average: 0.53, 0.71, 0.34
	Linux ingress-addon-legacy-845802 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [69fd8c3edd245985de1d5d96341025e35b83e39528893c2919b8e474e9fb0a54] <==
	* W1024 19:14:55.967611       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.131]
	I1024 19:14:55.968357       1 controller.go:609] quota admission added evaluator for: endpoints
	I1024 19:14:55.974497       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1024 19:14:56.656602       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1024 19:14:57.646750       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1024 19:14:57.778079       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1024 19:14:58.238539       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 19:15:12.863712       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1024 19:15:12.880664       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1024 19:15:13.201225       1 trace.go:116] Trace[594609302]: "GuaranteedUpdate etcd3" type:*certificates.CertificateSigningRequest (started: 2023-10-24 19:15:12.678220515 +0000 UTC m=+23.316680651) (total time: 522.985234ms):
	Trace[594609302]: [522.968324ms] [522.353513ms] Transaction committed
	I1024 19:15:13.209503       1 trace.go:116] Trace[1739181108]: "Update" url:/apis/certificates.k8s.io/v1beta1/certificatesigningrequests/csr-5bf76/approval,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:certificate-controller,client:192.168.39.131 (started: 2023-10-24 19:15:12.678127895 +0000 UTC m=+23.316588013) (total time: 531.353612ms):
	Trace[1739181108]: [531.315821ms] [531.256458ms] Object stored in database
	I1024 19:15:13.201630       1 trace.go:116] Trace[1821567144]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:service-account-controller,client:192.168.39.131 (started: 2023-10-24 19:15:12.681707628 +0000 UTC m=+23.320167770) (total time: 519.896778ms):
	Trace[1821567144]: [519.878396ms] [519.848741ms] Object stored in database
	I1024 19:15:13.201747       1 trace.go:116] Trace[1087259070]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (started: 2023-10-24 19:15:12.697674912 +0000 UTC m=+23.336135050) (total time: 504.022015ms):
	Trace[1087259070]: [504.007807ms] [503.560962ms] Transaction committed
	I1024 19:15:13.209711       1 trace.go:116] Trace[25448446]: "Update" url:/api/v1/namespaces/kube-public/configmaps/cluster-info,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:bootstrap-signer,client:192.168.39.131 (started: 2023-10-24 19:15:12.697580149 +0000 UTC m=+23.336040268) (total time: 512.115736ms):
	Trace[25448446]: [512.085514ms] [512.023363ms] Object stored in database
	I1024 19:15:13.206261       1 trace.go:116] Trace[2023633164]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.18.20 (linux/amd64) kubernetes/1f3e19b,client:127.0.0.1 (started: 2023-10-24 19:15:12.691115869 +0000 UTC m=+23.329575986) (total time: 515.125134ms):
	Trace[2023633164]: [515.125134ms] [515.117328ms] END
	I1024 19:15:49.182746       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1024 19:16:14.435450       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1024 19:18:49.163714       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E1024 19:18:50.226754       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [4e05be1927e82522044bbd0f41332943939987540c15b80f65977c7275ca33c8] <==
	* I1024 19:15:13.103182       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1024 19:15:13.217334       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9f0e5ae0-7713-4350-974a-634d82798595", APIVersion:"apps/v1", ResourceVersion:"205", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	E1024 19:15:13.237801       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1024 19:15:13.238548       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1024 19:15:13.238695       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1024 19:15:13.245313       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"2728e4d7-51e0-4d21-a184-0592c2128075", APIVersion:"apps/v1", ResourceVersion:"307", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-xv4wm
	I1024 19:15:13.245603       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"f63c85d2-be31-479f-9be5-9a763e6fa6bf", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-ptcwq
	I1024 19:15:13.301474       1 shared_informer.go:230] Caches are synced for resource quota 
	I1024 19:15:13.301641       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1024 19:15:13.305306       1 shared_informer.go:230] Caches are synced for resource quota 
	I1024 19:15:13.339504       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"2728e4d7-51e0-4d21-a184-0592c2128075", APIVersion:"apps/v1", ResourceVersion:"307", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-djszs
	E1024 19:15:13.398495       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"f63c85d2-be31-479f-9be5-9a763e6fa6bf", ResourceVersion:"212", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63833771697, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001a61660), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc001a61680)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001a616a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001a5cd80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc001a616c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a616e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001a61720)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001a03a90), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001a51a28), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00086a2a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000fbd0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001a51a78)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E1024 19:15:13.453312       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"f63c85d2-be31-479f-9be5-9a763e6fa6bf", ResourceVersion:"321", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63833771697, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000d826a0), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc000d82700)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000d82760), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d827c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000d82820), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDi
skVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001b98b80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.Sca
leIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000d82940), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000d82a60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource
)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(
nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000d82b20)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContex
t)(0xc000a5ff90), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0012d97e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000860460), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-
critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001b9eff8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0012d9838)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1024 19:15:13.625320       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9f0e5ae0-7713-4350-974a-634d82798595", APIVersion:"apps/v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1024 19:15:13.692308       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"2728e4d7-51e0-4d21-a184-0592c2128075", APIVersion:"apps/v1", ResourceVersion:"351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-djszs
	I1024 19:15:49.158398       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"013e5029-5840-4469-bb83-5cab6ed58a0d", APIVersion:"apps/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1024 19:15:49.187002       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"1e8cb5f1-3841-4c43-bfa1-c0fbaaf34a2c", APIVersion:"apps/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-9wjt9
	I1024 19:15:49.227456       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"545ca284-87d8-4813-8330-ff294fb839a1", APIVersion:"batch/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-xzh4w
	I1024 19:15:49.274084       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"4b36c0b8-8155-4e69-b53d-fcb7d764d3d3", APIVersion:"batch/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-b4drr
	I1024 19:15:51.510332       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"545ca284-87d8-4813-8330-ff294fb839a1", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1024 19:15:52.518828       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"4b36c0b8-8155-4e69-b53d-fcb7d764d3d3", APIVersion:"batch/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1024 19:18:36.640406       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"1c0c3714-2aa9-41e2-b946-2bc7eaa36771", APIVersion:"apps/v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1024 19:18:36.662290       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"181c1ee6-60e3-4ca2-b77c-6833640bda86", APIVersion:"apps/v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-xw6pj
	
	* 
	* ==> kube-proxy [735ff99c870bf4a3cbcb96f2e98e2515267570be0c88ac24f318dbfd47cd3bd9] <==
	* W1024 19:15:15.558443       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1024 19:15:15.566735       1 node.go:136] Successfully retrieved node IP: 192.168.39.131
	I1024 19:15:15.566819       1 server_others.go:186] Using iptables Proxier.
	I1024 19:15:15.569084       1 server.go:583] Version: v1.18.20
	I1024 19:15:15.571091       1 config.go:315] Starting service config controller
	I1024 19:15:15.571166       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1024 19:15:15.571256       1 config.go:133] Starting endpoints config controller
	I1024 19:15:15.571282       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1024 19:15:15.673742       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1024 19:15:15.673849       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [f51691a22c33858a05d0226e216ca8067d9812d9abbbd00f0299a1a94897b319] <==
	* I1024 19:14:54.409071       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1024 19:14:54.409110       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1024 19:14:54.410755       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1024 19:14:54.412441       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:14:54.412551       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:14:54.412572       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1024 19:14:54.416175       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:14:54.417253       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 19:14:54.417672       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 19:14:54.418025       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 19:14:54.418776       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 19:14:54.419214       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:14:54.419456       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 19:14:54.422114       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 19:14:54.422509       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:14:54.423316       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:14:54.423732       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:14:54.423773       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:14:55.247724       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 19:14:55.348475       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:14:55.357954       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 19:14:55.519683       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 19:14:55.540316       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:14:55.650846       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1024 19:14:58.812758       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 19:14:21 UTC, ends at Tue 2023-10-24 19:18:57 UTC. --
	Oct 24 19:15:53 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:15:53.741774    1427 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dcf47d-d6e5-4757-80ce-e86b8a4ab0cd-ingress-nginx-admission-token-75m5l" (OuterVolumeSpecName: "ingress-nginx-admission-token-75m5l") pod "18dcf47d-d6e5-4757-80ce-e86b8a4ab0cd" (UID: "18dcf47d-d6e5-4757-80ce-e86b8a4ab0cd"). InnerVolumeSpecName "ingress-nginx-admission-token-75m5l". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 24 19:15:53 ingress-addon-legacy-845802 kubelet[1427]: W1024 19:15:53.792208    1427 pod_container_deletor.go:77] Container "8efa47b086743afc309f71f551d58af41102f03fed8b64681d6d65a5cd6b6462" not found in pod's containers
	Oct 24 19:15:53 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:15:53.825050    1427 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-75m5l" (UniqueName: "kubernetes.io/secret/18dcf47d-d6e5-4757-80ce-e86b8a4ab0cd-ingress-nginx-admission-token-75m5l") on node "ingress-addon-legacy-845802" DevicePath ""
	Oct 24 19:16:00 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:16:00.569715    1427 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 24 19:16:00 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:16:00.648448    1427 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-ljbs9" (UniqueName: "kubernetes.io/secret/d726b6b6-1837-4090-991f-e669f8cc80a2-minikube-ingress-dns-token-ljbs9") pod "kube-ingress-dns-minikube" (UID: "d726b6b6-1837-4090-991f-e669f8cc80a2")
	Oct 24 19:16:14 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:16:14.610699    1427 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 24 19:16:14 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:16:14.694354    1427 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-skp9g" (UniqueName: "kubernetes.io/secret/238d799a-11dc-49ea-94eb-b98d29b3ceab-default-token-skp9g") pod "nginx" (UID: "238d799a-11dc-49ea-94eb-b98d29b3ceab")
	Oct 24 19:18:36 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:36.672471    1427 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 24 19:18:36 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:36.852755    1427 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-skp9g" (UniqueName: "kubernetes.io/secret/d473442e-2879-4ce4-b145-f611fa8dd42c-default-token-skp9g") pod "hello-world-app-5f5d8b66bb-xw6pj" (UID: "d473442e-2879-4ce4-b145-f611fa8dd42c")
	Oct 24 19:18:38 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:38.633928    1427 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 951055386a91dbf8ce5cb84d3fa9ec43037475d81de5dab8084468da63e63d9c
	Oct 24 19:18:38 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:38.759738    1427 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-ljbs9" (UniqueName: "kubernetes.io/secret/d726b6b6-1837-4090-991f-e669f8cc80a2-minikube-ingress-dns-token-ljbs9") pod "d726b6b6-1837-4090-991f-e669f8cc80a2" (UID: "d726b6b6-1837-4090-991f-e669f8cc80a2")
	Oct 24 19:18:38 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:38.764004    1427 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d726b6b6-1837-4090-991f-e669f8cc80a2-minikube-ingress-dns-token-ljbs9" (OuterVolumeSpecName: "minikube-ingress-dns-token-ljbs9") pod "d726b6b6-1837-4090-991f-e669f8cc80a2" (UID: "d726b6b6-1837-4090-991f-e669f8cc80a2"). InnerVolumeSpecName "minikube-ingress-dns-token-ljbs9". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 24 19:18:38 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:38.839718    1427 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 951055386a91dbf8ce5cb84d3fa9ec43037475d81de5dab8084468da63e63d9c
	Oct 24 19:18:38 ingress-addon-legacy-845802 kubelet[1427]: E1024 19:18:38.840847    1427 remote_runtime.go:295] ContainerStatus "951055386a91dbf8ce5cb84d3fa9ec43037475d81de5dab8084468da63e63d9c" from runtime service failed: rpc error: code = NotFound desc = could not find container "951055386a91dbf8ce5cb84d3fa9ec43037475d81de5dab8084468da63e63d9c": container with ID starting with 951055386a91dbf8ce5cb84d3fa9ec43037475d81de5dab8084468da63e63d9c not found: ID does not exist
	Oct 24 19:18:38 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:38.860068    1427 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-ljbs9" (UniqueName: "kubernetes.io/secret/d726b6b6-1837-4090-991f-e669f8cc80a2-minikube-ingress-dns-token-ljbs9") on node "ingress-addon-legacy-845802" DevicePath ""
	Oct 24 19:18:49 ingress-addon-legacy-845802 kubelet[1427]: E1024 19:18:49.139430    1427 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-9wjt9.179121485a92405c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-9wjt9", UID:"14737d25-0058-48da-a733-daf3fc7f6867", APIVersion:"v1", ResourceVersion:"468", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-845802"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1462406481f865c, ext:231542996770, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1462406481f865c, ext:231542996770, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-9wjt9.179121485a92405c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 24 19:18:49 ingress-addon-legacy-845802 kubelet[1427]: E1024 19:18:49.167345    1427 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-9wjt9.179121485a92405c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-9wjt9", UID:"14737d25-0058-48da-a733-daf3fc7f6867", APIVersion:"v1", ResourceVersion:"468", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-845802"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1462406481f865c, ext:231542996770, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1462406494c7144, ext:231562717706, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-9wjt9.179121485a92405c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 24 19:18:51 ingress-addon-legacy-845802 kubelet[1427]: W1024 19:18:51.687393    1427 pod_container_deletor.go:77] Container "1ae39cb5902ad44baa6d863f67a7542cf9c9da79c508d3bacf5f0f31173c96eb" not found in pod's containers
	Oct 24 19:18:53 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:53.307106    1427 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/14737d25-0058-48da-a733-daf3fc7f6867-webhook-cert") pod "14737d25-0058-48da-a733-daf3fc7f6867" (UID: "14737d25-0058-48da-a733-daf3fc7f6867")
	Oct 24 19:18:53 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:53.307153    1427 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-gdtcg" (UniqueName: "kubernetes.io/secret/14737d25-0058-48da-a733-daf3fc7f6867-ingress-nginx-token-gdtcg") pod "14737d25-0058-48da-a733-daf3fc7f6867" (UID: "14737d25-0058-48da-a733-daf3fc7f6867")
	Oct 24 19:18:53 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:53.309463    1427 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14737d25-0058-48da-a733-daf3fc7f6867-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "14737d25-0058-48da-a733-daf3fc7f6867" (UID: "14737d25-0058-48da-a733-daf3fc7f6867"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 24 19:18:53 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:53.310341    1427 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14737d25-0058-48da-a733-daf3fc7f6867-ingress-nginx-token-gdtcg" (OuterVolumeSpecName: "ingress-nginx-token-gdtcg") pod "14737d25-0058-48da-a733-daf3fc7f6867" (UID: "14737d25-0058-48da-a733-daf3fc7f6867"). InnerVolumeSpecName "ingress-nginx-token-gdtcg". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 24 19:18:53 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:53.407427    1427 reconciler.go:319] Volume detached for volume "ingress-nginx-token-gdtcg" (UniqueName: "kubernetes.io/secret/14737d25-0058-48da-a733-daf3fc7f6867-ingress-nginx-token-gdtcg") on node "ingress-addon-legacy-845802" DevicePath ""
	Oct 24 19:18:53 ingress-addon-legacy-845802 kubelet[1427]: I1024 19:18:53.407487    1427 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/14737d25-0058-48da-a733-daf3fc7f6867-webhook-cert") on node "ingress-addon-legacy-845802" DevicePath ""
	Oct 24 19:18:54 ingress-addon-legacy-845802 kubelet[1427]: W1024 19:18:54.229334    1427 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/14737d25-0058-48da-a733-daf3fc7f6867/volumes" does not exist
	
	* 
	* ==> storage-provisioner [43e531483bd252374e672bd94222f2a09d72d255ef82c303a19f516d737fbfb7] <==
	* I1024 19:15:15.906455       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:15:15.915134       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:15:15.915201       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 19:15:15.925276       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 19:15:15.926315       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-845802_29c0695e-d15e-42e5-84e1-a80424589cda!
	I1024 19:15:15.927799       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c7ad64ec-f3f9-41d0-8d6f-8e2cff0a8305", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-845802_29c0695e-d15e-42e5-84e1-a80424589cda became leader
	I1024 19:15:16.026943       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-845802_29c0695e-d15e-42e5-84e1-a80424589cda!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-845802 -n ingress-addon-legacy-845802
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-845802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (177.01s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- exec busybox-5bc68d56bd-ddcjz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- exec busybox-5bc68d56bd-ddcjz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-632589 -- exec busybox-5bc68d56bd-ddcjz -- sh -c "ping -c 1 192.168.39.1": exit status 1 (212.398464ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-ddcjz): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- exec busybox-5bc68d56bd-wrmmm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- exec busybox-5bc68d56bd-wrmmm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-632589 -- exec busybox-5bc68d56bd-wrmmm -- sh -c "ping -c 1 192.168.39.1": exit status 1 (184.80971ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-wrmmm): exit status 1
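Note: the "ping: permission denied (are you root?)" error above is typically a capability problem rather than a routing one: busybox's ping wants a raw ICMP socket, and a container process that is not root and whose runtime has dropped CAP_NET_RAW cannot open one, so the ping to the host gateway fails before any packet is sent. The sketch below is a minimal, illustrative pod spec showing how that capability could be granted explicitly; the pod name, image tag, and command are assumptions for illustration only and are not the manifest this test actually applies (testdata/multinodes/multinode-pod-dns-test.yaml).

	# illustrative only -- not the test's manifest
	apiVersion: v1
	kind: Pod
	metadata:
	  name: ping-capable            # hypothetical name
	spec:
	  containers:
	  - name: busybox
	    image: busybox:1.36         # any busybox tag with the ping applet
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]        # lets ping open a raw ICMP socket without running as root

With such a spec, `kubectl exec <pod> -- ping -c 1 192.168.39.1` would be expected to succeed even as a non-root user, assuming the node network itself is reachable.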
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-632589 -n multinode-632589
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-632589 logs -n 25: (1.311438001s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-658426 ssh -- ls                    | mount-start-2-658426 | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-658426 ssh --                       | mount-start-2-658426 | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-658426                           | mount-start-2-658426 | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	| start   | -p mount-start-2-658426                           | mount-start-2-658426 | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:24 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-658426 | jenkins | v1.31.2 | 24 Oct 23 19:24 UTC |                     |
	|         | --profile mount-start-2-658426                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-658426 ssh -- ls                    | mount-start-2-658426 | jenkins | v1.31.2 | 24 Oct 23 19:24 UTC | 24 Oct 23 19:24 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-658426 ssh --                       | mount-start-2-658426 | jenkins | v1.31.2 | 24 Oct 23 19:24 UTC | 24 Oct 23 19:24 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-658426                           | mount-start-2-658426 | jenkins | v1.31.2 | 24 Oct 23 19:24 UTC | 24 Oct 23 19:24 UTC |
	| delete  | -p mount-start-1-637861                           | mount-start-1-637861 | jenkins | v1.31.2 | 24 Oct 23 19:24 UTC | 24 Oct 23 19:24 UTC |
	| start   | -p multinode-632589                               | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:24 UTC | 24 Oct 23 19:26 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- apply -f                   | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- rollout                    | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- get pods -o                | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- get pods -o                | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- exec                       | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | busybox-5bc68d56bd-ddcjz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- exec                       | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | busybox-5bc68d56bd-wrmmm --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- exec                       | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | busybox-5bc68d56bd-ddcjz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- exec                       | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | busybox-5bc68d56bd-wrmmm --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- exec                       | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | busybox-5bc68d56bd-ddcjz -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- exec                       | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | busybox-5bc68d56bd-wrmmm -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- get pods -o                | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- exec                       | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | busybox-5bc68d56bd-ddcjz                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- exec                       | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC |                     |
	|         | busybox-5bc68d56bd-ddcjz -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- exec                       | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | busybox-5bc68d56bd-wrmmm                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-632589 -- exec                       | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC |                     |
	|         | busybox-5bc68d56bd-wrmmm -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:24:11
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:24:11.775235   29716 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:24:11.775484   29716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:24:11.775493   29716 out.go:309] Setting ErrFile to fd 2...
	I1024 19:24:11.775498   29716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:24:11.775648   29716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 19:24:11.776219   29716 out.go:303] Setting JSON to false
	I1024 19:24:11.777081   29716 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3750,"bootTime":1698171702,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:24:11.777140   29716 start.go:138] virtualization: kvm guest
	I1024 19:24:11.779416   29716 out.go:177] * [multinode-632589] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:24:11.781541   29716 notify.go:220] Checking for updates...
	I1024 19:24:11.781544   29716 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:24:11.782836   29716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:24:11.784352   29716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:24:11.786016   29716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:24:11.787607   29716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:24:11.789068   29716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:24:11.790592   29716 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:24:11.823694   29716 out.go:177] * Using the kvm2 driver based on user configuration
	I1024 19:24:11.825073   29716 start.go:298] selected driver: kvm2
	I1024 19:24:11.825088   29716 start.go:902] validating driver "kvm2" against <nil>
	I1024 19:24:11.825098   29716 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:24:11.825767   29716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:24:11.825831   29716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:24:11.839413   29716 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:24:11.839459   29716 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:24:11.839660   29716 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:24:11.839690   29716 cni.go:84] Creating CNI manager for ""
	I1024 19:24:11.839701   29716 cni.go:136] 0 nodes found, recommending kindnet
	I1024 19:24:11.839712   29716 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 19:24:11.839718   29716 start_flags.go:323] config:
	{Name:multinode-632589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:24:11.839840   29716 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:24:11.841552   29716 out.go:177] * Starting control plane node multinode-632589 in cluster multinode-632589
	I1024 19:24:11.842847   29716 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:24:11.842909   29716 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1024 19:24:11.842923   29716 cache.go:57] Caching tarball of preloaded images
	I1024 19:24:11.843023   29716 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 19:24:11.843039   29716 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:24:11.843345   29716 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/config.json ...
	I1024 19:24:11.843373   29716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/config.json: {Name:mk69367ac63b9051baa6a5f4d461954924c82a82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:11.843508   29716 start.go:365] acquiring machines lock for multinode-632589: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:24:11.843542   29716 start.go:369] acquired machines lock for "multinode-632589" in 21.988µs
	I1024 19:24:11.843564   29716 start.go:93] Provisioning new machine with config: &{Name:multinode-632589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:24:11.843623   29716 start.go:125] createHost starting for "" (driver="kvm2")
	I1024 19:24:11.845403   29716 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1024 19:24:11.845517   29716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:24:11.845554   29716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:24:11.858678   29716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I1024 19:24:11.859070   29716 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:24:11.859557   29716 main.go:141] libmachine: Using API Version  1
	I1024 19:24:11.859580   29716 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:24:11.859864   29716 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:24:11.860023   29716 main.go:141] libmachine: (multinode-632589) Calling .GetMachineName
	I1024 19:24:11.860172   29716 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:24:11.860313   29716 start.go:159] libmachine.API.Create for "multinode-632589" (driver="kvm2")
	I1024 19:24:11.860342   29716 client.go:168] LocalClient.Create starting
	I1024 19:24:11.860378   29716 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem
	I1024 19:24:11.860412   29716 main.go:141] libmachine: Decoding PEM data...
	I1024 19:24:11.860432   29716 main.go:141] libmachine: Parsing certificate...
	I1024 19:24:11.860478   29716 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem
	I1024 19:24:11.860496   29716 main.go:141] libmachine: Decoding PEM data...
	I1024 19:24:11.860507   29716 main.go:141] libmachine: Parsing certificate...
	I1024 19:24:11.860532   29716 main.go:141] libmachine: Running pre-create checks...
	I1024 19:24:11.860541   29716 main.go:141] libmachine: (multinode-632589) Calling .PreCreateCheck
	I1024 19:24:11.860920   29716 main.go:141] libmachine: (multinode-632589) Calling .GetConfigRaw
	I1024 19:24:11.861253   29716 main.go:141] libmachine: Creating machine...
	I1024 19:24:11.861267   29716 main.go:141] libmachine: (multinode-632589) Calling .Create
	I1024 19:24:11.861388   29716 main.go:141] libmachine: (multinode-632589) Creating KVM machine...
	I1024 19:24:11.862632   29716 main.go:141] libmachine: (multinode-632589) DBG | found existing default KVM network
	I1024 19:24:11.863249   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:11.863126   29738 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1024 19:24:11.867958   29716 main.go:141] libmachine: (multinode-632589) DBG | trying to create private KVM network mk-multinode-632589 192.168.39.0/24...
	I1024 19:24:11.933884   29716 main.go:141] libmachine: (multinode-632589) DBG | private KVM network mk-multinode-632589 192.168.39.0/24 created
	I1024 19:24:11.933936   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:11.933778   29738 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:24:11.933960   29716 main.go:141] libmachine: (multinode-632589) Setting up store path in /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589 ...
	I1024 19:24:11.933989   29716 main.go:141] libmachine: (multinode-632589) Building disk image from file:///home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso
	I1024 19:24:11.934027   29716 main.go:141] libmachine: (multinode-632589) Downloading /home/jenkins/minikube-integration/17485-9023/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso...
	I1024 19:24:12.138774   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:12.138652   29738 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa...
	I1024 19:24:12.223507   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:12.223371   29738 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/multinode-632589.rawdisk...
	I1024 19:24:12.223540   29716 main.go:141] libmachine: (multinode-632589) DBG | Writing magic tar header
	I1024 19:24:12.223558   29716 main.go:141] libmachine: (multinode-632589) DBG | Writing SSH key tar header
	I1024 19:24:12.223578   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:12.223479   29738 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589 ...
	I1024 19:24:12.223608   29716 main.go:141] libmachine: (multinode-632589) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589 (perms=drwx------)
	I1024 19:24:12.223654   29716 main.go:141] libmachine: (multinode-632589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589
	I1024 19:24:12.223674   29716 main.go:141] libmachine: (multinode-632589) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube/machines (perms=drwxr-xr-x)
	I1024 19:24:12.223690   29716 main.go:141] libmachine: (multinode-632589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube/machines
	I1024 19:24:12.223702   29716 main.go:141] libmachine: (multinode-632589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:24:12.223711   29716 main.go:141] libmachine: (multinode-632589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023
	I1024 19:24:12.223719   29716 main.go:141] libmachine: (multinode-632589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1024 19:24:12.223731   29716 main.go:141] libmachine: (multinode-632589) DBG | Checking permissions on dir: /home/jenkins
	I1024 19:24:12.223743   29716 main.go:141] libmachine: (multinode-632589) DBG | Checking permissions on dir: /home
	I1024 19:24:12.223756   29716 main.go:141] libmachine: (multinode-632589) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube (perms=drwxr-xr-x)
	I1024 19:24:12.223771   29716 main.go:141] libmachine: (multinode-632589) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023 (perms=drwxrwxr-x)
	I1024 19:24:12.223781   29716 main.go:141] libmachine: (multinode-632589) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1024 19:24:12.223792   29716 main.go:141] libmachine: (multinode-632589) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1024 19:24:12.223804   29716 main.go:141] libmachine: (multinode-632589) Creating domain...
	I1024 19:24:12.223822   29716 main.go:141] libmachine: (multinode-632589) DBG | Skipping /home - not owner
	I1024 19:24:12.224893   29716 main.go:141] libmachine: (multinode-632589) define libvirt domain using xml: 
	I1024 19:24:12.224913   29716 main.go:141] libmachine: (multinode-632589) <domain type='kvm'>
	I1024 19:24:12.224924   29716 main.go:141] libmachine: (multinode-632589)   <name>multinode-632589</name>
	I1024 19:24:12.224943   29716 main.go:141] libmachine: (multinode-632589)   <memory unit='MiB'>2200</memory>
	I1024 19:24:12.224956   29716 main.go:141] libmachine: (multinode-632589)   <vcpu>2</vcpu>
	I1024 19:24:12.224971   29716 main.go:141] libmachine: (multinode-632589)   <features>
	I1024 19:24:12.224980   29716 main.go:141] libmachine: (multinode-632589)     <acpi/>
	I1024 19:24:12.224985   29716 main.go:141] libmachine: (multinode-632589)     <apic/>
	I1024 19:24:12.225005   29716 main.go:141] libmachine: (multinode-632589)     <pae/>
	I1024 19:24:12.225021   29716 main.go:141] libmachine: (multinode-632589)     
	I1024 19:24:12.225035   29716 main.go:141] libmachine: (multinode-632589)   </features>
	I1024 19:24:12.225043   29716 main.go:141] libmachine: (multinode-632589)   <cpu mode='host-passthrough'>
	I1024 19:24:12.225052   29716 main.go:141] libmachine: (multinode-632589)   
	I1024 19:24:12.225057   29716 main.go:141] libmachine: (multinode-632589)   </cpu>
	I1024 19:24:12.225063   29716 main.go:141] libmachine: (multinode-632589)   <os>
	I1024 19:24:12.225077   29716 main.go:141] libmachine: (multinode-632589)     <type>hvm</type>
	I1024 19:24:12.225091   29716 main.go:141] libmachine: (multinode-632589)     <boot dev='cdrom'/>
	I1024 19:24:12.225110   29716 main.go:141] libmachine: (multinode-632589)     <boot dev='hd'/>
	I1024 19:24:12.225125   29716 main.go:141] libmachine: (multinode-632589)     <bootmenu enable='no'/>
	I1024 19:24:12.225135   29716 main.go:141] libmachine: (multinode-632589)   </os>
	I1024 19:24:12.225142   29716 main.go:141] libmachine: (multinode-632589)   <devices>
	I1024 19:24:12.225148   29716 main.go:141] libmachine: (multinode-632589)     <disk type='file' device='cdrom'>
	I1024 19:24:12.225157   29716 main.go:141] libmachine: (multinode-632589)       <source file='/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/boot2docker.iso'/>
	I1024 19:24:12.225169   29716 main.go:141] libmachine: (multinode-632589)       <target dev='hdc' bus='scsi'/>
	I1024 19:24:12.225192   29716 main.go:141] libmachine: (multinode-632589)       <readonly/>
	I1024 19:24:12.225205   29716 main.go:141] libmachine: (multinode-632589)     </disk>
	I1024 19:24:12.225213   29716 main.go:141] libmachine: (multinode-632589)     <disk type='file' device='disk'>
	I1024 19:24:12.225233   29716 main.go:141] libmachine: (multinode-632589)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1024 19:24:12.225248   29716 main.go:141] libmachine: (multinode-632589)       <source file='/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/multinode-632589.rawdisk'/>
	I1024 19:24:12.225257   29716 main.go:141] libmachine: (multinode-632589)       <target dev='hda' bus='virtio'/>
	I1024 19:24:12.225263   29716 main.go:141] libmachine: (multinode-632589)     </disk>
	I1024 19:24:12.225271   29716 main.go:141] libmachine: (multinode-632589)     <interface type='network'>
	I1024 19:24:12.225279   29716 main.go:141] libmachine: (multinode-632589)       <source network='mk-multinode-632589'/>
	I1024 19:24:12.225287   29716 main.go:141] libmachine: (multinode-632589)       <model type='virtio'/>
	I1024 19:24:12.225312   29716 main.go:141] libmachine: (multinode-632589)     </interface>
	I1024 19:24:12.225331   29716 main.go:141] libmachine: (multinode-632589)     <interface type='network'>
	I1024 19:24:12.225342   29716 main.go:141] libmachine: (multinode-632589)       <source network='default'/>
	I1024 19:24:12.225349   29716 main.go:141] libmachine: (multinode-632589)       <model type='virtio'/>
	I1024 19:24:12.225356   29716 main.go:141] libmachine: (multinode-632589)     </interface>
	I1024 19:24:12.225362   29716 main.go:141] libmachine: (multinode-632589)     <serial type='pty'>
	I1024 19:24:12.225368   29716 main.go:141] libmachine: (multinode-632589)       <target port='0'/>
	I1024 19:24:12.225379   29716 main.go:141] libmachine: (multinode-632589)     </serial>
	I1024 19:24:12.225385   29716 main.go:141] libmachine: (multinode-632589)     <console type='pty'>
	I1024 19:24:12.225393   29716 main.go:141] libmachine: (multinode-632589)       <target type='serial' port='0'/>
	I1024 19:24:12.225401   29716 main.go:141] libmachine: (multinode-632589)     </console>
	I1024 19:24:12.225406   29716 main.go:141] libmachine: (multinode-632589)     <rng model='virtio'>
	I1024 19:24:12.225442   29716 main.go:141] libmachine: (multinode-632589)       <backend model='random'>/dev/random</backend>
	I1024 19:24:12.225469   29716 main.go:141] libmachine: (multinode-632589)     </rng>
	I1024 19:24:12.225488   29716 main.go:141] libmachine: (multinode-632589)     
	I1024 19:24:12.225505   29716 main.go:141] libmachine: (multinode-632589)     
	I1024 19:24:12.225520   29716 main.go:141] libmachine: (multinode-632589)   </devices>
	I1024 19:24:12.225532   29716 main.go:141] libmachine: (multinode-632589) </domain>
	I1024 19:24:12.225546   29716 main.go:141] libmachine: (multinode-632589) 
	I1024 19:24:12.229731   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:02:d3:83 in network default
	I1024 19:24:12.230273   29716 main.go:141] libmachine: (multinode-632589) Ensuring networks are active...
	I1024 19:24:12.230300   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:12.230942   29716 main.go:141] libmachine: (multinode-632589) Ensuring network default is active
	I1024 19:24:12.231247   29716 main.go:141] libmachine: (multinode-632589) Ensuring network mk-multinode-632589 is active
	I1024 19:24:12.231693   29716 main.go:141] libmachine: (multinode-632589) Getting domain xml...
	I1024 19:24:12.232386   29716 main.go:141] libmachine: (multinode-632589) Creating domain...
	I1024 19:24:13.434738   29716 main.go:141] libmachine: (multinode-632589) Waiting to get IP...
	I1024 19:24:13.435564   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:13.435904   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:13.435962   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:13.435891   29738 retry.go:31] will retry after 300.931987ms: waiting for machine to come up
	I1024 19:24:13.738447   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:13.738852   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:13.738882   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:13.738800   29738 retry.go:31] will retry after 248.026359ms: waiting for machine to come up
	I1024 19:24:13.988115   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:13.988474   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:13.988505   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:13.988419   29738 retry.go:31] will retry after 442.60356ms: waiting for machine to come up
	I1024 19:24:14.433013   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:14.433434   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:14.433468   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:14.433370   29738 retry.go:31] will retry after 458.292999ms: waiting for machine to come up
	I1024 19:24:14.892858   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:14.893287   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:14.893326   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:14.893245   29738 retry.go:31] will retry after 731.18663ms: waiting for machine to come up
	I1024 19:24:15.626230   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:15.626713   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:15.626738   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:15.626685   29738 retry.go:31] will retry after 914.348076ms: waiting for machine to come up
	I1024 19:24:16.542894   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:16.543790   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:16.543855   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:16.543283   29738 retry.go:31] will retry after 951.637355ms: waiting for machine to come up
	I1024 19:24:17.496994   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:17.497385   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:17.497410   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:17.497362   29738 retry.go:31] will retry after 1.15575216s: waiting for machine to come up
	I1024 19:24:18.654397   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:18.654753   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:18.654781   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:18.654697   29738 retry.go:31] will retry after 1.231185632s: waiting for machine to come up
	I1024 19:24:19.888180   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:19.888619   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:19.888641   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:19.888587   29738 retry.go:31] will retry after 2.138172276s: waiting for machine to come up
	I1024 19:24:22.028839   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:22.029216   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:22.029250   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:22.029153   29738 retry.go:31] will retry after 2.08010907s: waiting for machine to come up
	I1024 19:24:24.110262   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:24.110635   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:24.110657   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:24.110600   29738 retry.go:31] will retry after 3.418454674s: waiting for machine to come up
	I1024 19:24:27.530638   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:27.531071   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:27.531101   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:27.531032   29738 retry.go:31] will retry after 4.183905084s: waiting for machine to come up
	I1024 19:24:31.719209   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:31.719582   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:24:31.719610   29716 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:24:31.719542   29738 retry.go:31] will retry after 3.599301363s: waiting for machine to come up
	I1024 19:24:35.322921   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:35.323324   29716 main.go:141] libmachine: (multinode-632589) Found IP for machine: 192.168.39.247
	I1024 19:24:35.323344   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has current primary IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:35.323352   29716 main.go:141] libmachine: (multinode-632589) Reserving static IP address...
	I1024 19:24:35.323763   29716 main.go:141] libmachine: (multinode-632589) DBG | unable to find host DHCP lease matching {name: "multinode-632589", mac: "52:54:00:9a:c3:34", ip: "192.168.39.247"} in network mk-multinode-632589
	I1024 19:24:35.391785   29716 main.go:141] libmachine: (multinode-632589) DBG | Getting to WaitForSSH function...
	I1024 19:24:35.391813   29716 main.go:141] libmachine: (multinode-632589) Reserved static IP address: 192.168.39.247
	I1024 19:24:35.391852   29716 main.go:141] libmachine: (multinode-632589) Waiting for SSH to be available...
	I1024 19:24:35.394596   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:35.395058   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:35.395107   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:35.395210   29716 main.go:141] libmachine: (multinode-632589) DBG | Using SSH client type: external
	I1024 19:24:35.395238   29716 main.go:141] libmachine: (multinode-632589) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa (-rw-------)
	I1024 19:24:35.395275   29716 main.go:141] libmachine: (multinode-632589) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 19:24:35.395291   29716 main.go:141] libmachine: (multinode-632589) DBG | About to run SSH command:
	I1024 19:24:35.395306   29716 main.go:141] libmachine: (multinode-632589) DBG | exit 0
	I1024 19:24:35.488589   29716 main.go:141] libmachine: (multinode-632589) DBG | SSH cmd err, output: <nil>: 
	I1024 19:24:35.488841   29716 main.go:141] libmachine: (multinode-632589) KVM machine creation complete!
	I1024 19:24:35.489133   29716 main.go:141] libmachine: (multinode-632589) Calling .GetConfigRaw
	I1024 19:24:35.489693   29716 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:24:35.489923   29716 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:24:35.490097   29716 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1024 19:24:35.490116   29716 main.go:141] libmachine: (multinode-632589) Calling .GetState
	I1024 19:24:35.491286   29716 main.go:141] libmachine: Detecting operating system of created instance...
	I1024 19:24:35.491300   29716 main.go:141] libmachine: Waiting for SSH to be available...
	I1024 19:24:35.491306   29716 main.go:141] libmachine: Getting to WaitForSSH function...
	I1024 19:24:35.491312   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:24:35.493353   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:35.493702   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:35.493733   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:35.493844   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:24:35.493968   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:35.494119   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:35.494219   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:24:35.494331   29716 main.go:141] libmachine: Using SSH client type: native
	I1024 19:24:35.494744   29716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I1024 19:24:35.494759   29716 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1024 19:24:35.620303   29716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:24:35.620324   29716 main.go:141] libmachine: Detecting the provisioner...
	I1024 19:24:35.620332   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:24:35.622899   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:35.623254   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:35.623275   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:35.623436   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:24:35.623622   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:35.623791   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:35.623952   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:24:35.624093   29716 main.go:141] libmachine: Using SSH client type: native
	I1024 19:24:35.624428   29716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I1024 19:24:35.624441   29716 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1024 19:24:35.750063   29716 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g71212f5-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1024 19:24:35.750151   29716 main.go:141] libmachine: found compatible host: buildroot
	I1024 19:24:35.750163   29716 main.go:141] libmachine: Provisioning with buildroot...
	I1024 19:24:35.750171   29716 main.go:141] libmachine: (multinode-632589) Calling .GetMachineName
	I1024 19:24:35.750445   29716 buildroot.go:166] provisioning hostname "multinode-632589"
	I1024 19:24:35.750470   29716 main.go:141] libmachine: (multinode-632589) Calling .GetMachineName
	I1024 19:24:35.750667   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:24:35.754153   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:35.754569   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:35.754609   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:35.754747   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:24:35.754939   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:35.755117   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:35.755285   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:24:35.755454   29716 main.go:141] libmachine: Using SSH client type: native
	I1024 19:24:35.755763   29716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I1024 19:24:35.755776   29716 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-632589 && echo "multinode-632589" | sudo tee /etc/hostname
	I1024 19:24:35.894202   29716 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-632589
	
	I1024 19:24:35.894228   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:24:35.896749   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:35.897039   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:35.897083   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:35.897198   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:24:35.897368   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:35.897523   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:35.897664   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:24:35.897829   29716 main.go:141] libmachine: Using SSH client type: native
	I1024 19:24:35.898132   29716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I1024 19:24:35.898159   29716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-632589' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-632589/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-632589' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:24:36.032724   29716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:24:36.032764   29716 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 19:24:36.032804   29716 buildroot.go:174] setting up certificates
	I1024 19:24:36.032816   29716 provision.go:83] configureAuth start
	I1024 19:24:36.032834   29716 main.go:141] libmachine: (multinode-632589) Calling .GetMachineName
	I1024 19:24:36.033094   29716 main.go:141] libmachine: (multinode-632589) Calling .GetIP
	I1024 19:24:36.035761   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.036143   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:36.036171   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.036260   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:24:36.038289   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.038627   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:36.038657   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.038804   29716 provision.go:138] copyHostCerts
	I1024 19:24:36.038837   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:24:36.038876   29716 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 19:24:36.038896   29716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:24:36.038963   29716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 19:24:36.039052   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:24:36.039077   29716 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 19:24:36.039086   29716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:24:36.039111   29716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 19:24:36.039170   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:24:36.039194   29716 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 19:24:36.039198   29716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:24:36.039222   29716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 19:24:36.039283   29716 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.multinode-632589 san=[192.168.39.247 192.168.39.247 localhost 127.0.0.1 minikube multinode-632589]
	I1024 19:24:36.284948   29716 provision.go:172] copyRemoteCerts
	I1024 19:24:36.285013   29716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:24:36.285053   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:24:36.287673   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.288022   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:36.288053   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.288181   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:24:36.288365   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:36.288520   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:24:36.288648   29716 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:24:36.382672   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 19:24:36.382730   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1024 19:24:36.404495   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 19:24:36.404553   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:24:36.426139   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 19:24:36.426194   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 19:24:36.446639   29716 provision.go:86] duration metric: configureAuth took 413.810488ms
	I1024 19:24:36.446656   29716 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:24:36.446796   29716 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:24:36.446859   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:24:36.449374   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.449719   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:36.449763   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.449904   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:24:36.450098   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:36.450256   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:36.450383   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:24:36.450534   29716 main.go:141] libmachine: Using SSH client type: native
	I1024 19:24:36.450890   29716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I1024 19:24:36.450912   29716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:24:36.771583   29716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:24:36.771610   29716 main.go:141] libmachine: Checking connection to Docker...
	I1024 19:24:36.771620   29716 main.go:141] libmachine: (multinode-632589) Calling .GetURL
	I1024 19:24:36.773017   29716 main.go:141] libmachine: (multinode-632589) DBG | Using libvirt version 6000000
	I1024 19:24:36.774944   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.775209   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:36.775237   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.775384   29716 main.go:141] libmachine: Docker is up and running!
	I1024 19:24:36.777727   29716 main.go:141] libmachine: Reticulating splines...
	I1024 19:24:36.777749   29716 client.go:171] LocalClient.Create took 24.91738524s
	I1024 19:24:36.777783   29716 start.go:167] duration metric: libmachine.API.Create for "multinode-632589" took 24.917471473s
	I1024 19:24:36.777797   29716 start.go:300] post-start starting for "multinode-632589" (driver="kvm2")
	I1024 19:24:36.777810   29716 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:24:36.777841   29716 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:24:36.778108   29716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:24:36.778132   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:24:36.780639   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.780928   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:36.780977   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.781089   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:24:36.781267   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:36.781426   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:24:36.781563   29716 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:24:36.874302   29716 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:24:36.878968   29716 command_runner.go:130] > NAME=Buildroot
	I1024 19:24:36.878991   29716 command_runner.go:130] > VERSION=2021.02.12-1-g71212f5-dirty
	I1024 19:24:36.878996   29716 command_runner.go:130] > ID=buildroot
	I1024 19:24:36.879001   29716 command_runner.go:130] > VERSION_ID=2021.02.12
	I1024 19:24:36.879005   29716 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1024 19:24:36.879035   29716 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 19:24:36.879045   29716 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 19:24:36.879112   29716 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 19:24:36.879228   29716 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 19:24:36.879240   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> /etc/ssl/certs/162982.pem
	I1024 19:24:36.879320   29716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:24:36.887719   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:24:36.910825   29716 start.go:303] post-start completed in 133.015059ms
	I1024 19:24:36.910871   29716 main.go:141] libmachine: (multinode-632589) Calling .GetConfigRaw
	I1024 19:24:36.911463   29716 main.go:141] libmachine: (multinode-632589) Calling .GetIP
	I1024 19:24:36.914216   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.914580   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:36.914617   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.914846   29716 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/config.json ...
	I1024 19:24:36.915067   29716 start.go:128] duration metric: createHost completed in 25.071435604s
	I1024 19:24:36.915089   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:24:36.917054   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.917332   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:36.917362   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:36.917455   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:24:36.917626   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:36.917790   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:36.917951   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:24:36.918147   29716 main.go:141] libmachine: Using SSH client type: native
	I1024 19:24:36.918454   29716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I1024 19:24:36.918464   29716 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 19:24:37.046166   29716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698175477.017542394
	
	I1024 19:24:37.046182   29716 fix.go:206] guest clock: 1698175477.017542394
	I1024 19:24:37.046189   29716 fix.go:219] Guest: 2023-10-24 19:24:37.017542394 +0000 UTC Remote: 2023-10-24 19:24:36.915079685 +0000 UTC m=+25.186532837 (delta=102.462709ms)
	I1024 19:24:37.046206   29716 fix.go:190] guest clock delta is within tolerance: 102.462709ms
	I1024 19:24:37.046211   29716 start.go:83] releasing machines lock for "multinode-632589", held for 25.202659402s
	I1024 19:24:37.046227   29716 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:24:37.046495   29716 main.go:141] libmachine: (multinode-632589) Calling .GetIP
	I1024 19:24:37.049105   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:37.049475   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:37.049500   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:37.049647   29716 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:24:37.050142   29716 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:24:37.050302   29716 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:24:37.050387   29716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:24:37.050436   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:24:37.050517   29716 ssh_runner.go:195] Run: cat /version.json
	I1024 19:24:37.050544   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:24:37.053170   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:37.053209   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:37.053609   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:37.053640   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:37.053684   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:37.053714   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:37.053812   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:24:37.053936   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:24:37.054002   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:37.054103   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:24:37.054162   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:24:37.054240   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:24:37.054336   29716 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:24:37.054475   29716 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:24:37.163767   29716 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1024 19:24:37.163814   29716 command_runner.go:130] > {"iso_version": "v1.31.0-1697471113-17434", "kicbase_version": "v0.0.40-1697451950-17434", "minikube_version": "v1.31.2", "commit": "141089eac34bd516aedd7845aa4003657eadd19b"}
	I1024 19:24:37.163922   29716 ssh_runner.go:195] Run: systemctl --version
	I1024 19:24:37.169749   29716 command_runner.go:130] > systemd 247 (247)
	I1024 19:24:37.169781   29716 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1024 19:24:37.169849   29716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:24:37.326360   29716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:24:37.332185   29716 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1024 19:24:37.332378   29716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:24:37.332449   29716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:24:37.346483   29716 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1024 19:24:37.346525   29716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 19:24:37.346532   29716 start.go:472] detecting cgroup driver to use...
	I1024 19:24:37.346584   29716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:24:37.359095   29716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:24:37.370859   29716 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:24:37.370918   29716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:24:37.383133   29716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:24:37.395749   29716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:24:37.495562   29716 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1024 19:24:37.495633   29716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:24:37.508137   29716 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1024 19:24:37.610395   29716 docker.go:214] disabling docker service ...
	I1024 19:24:37.610479   29716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:24:37.622893   29716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:24:37.633324   29716 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1024 19:24:37.634238   29716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:24:37.647189   29716 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1024 19:24:37.738461   29716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:24:37.751181   29716 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1024 19:24:37.751602   29716 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1024 19:24:37.843871   29716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:24:37.855823   29716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:24:37.872371   29716 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1024 19:24:37.872414   29716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:24:37.872469   29716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:24:37.881159   29716 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:24:37.881207   29716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:24:37.890176   29716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:24:37.898840   29716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:24:37.907746   29716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:24:37.916976   29716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:24:37.924619   29716 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 19:24:37.924653   29716 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 19:24:37.924694   29716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 19:24:37.936771   29716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:24:37.945017   29716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:24:38.046344   29716 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:24:38.214708   29716 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:24:38.214792   29716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:24:38.219539   29716 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1024 19:24:38.219564   29716 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1024 19:24:38.219574   29716 command_runner.go:130] > Device: 16h/22d	Inode: 714         Links: 1
	I1024 19:24:38.219583   29716 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:24:38.219591   29716 command_runner.go:130] > Access: 2023-10-24 19:24:38.172878824 +0000
	I1024 19:24:38.219600   29716 command_runner.go:130] > Modify: 2023-10-24 19:24:38.172878824 +0000
	I1024 19:24:38.219617   29716 command_runner.go:130] > Change: 2023-10-24 19:24:38.172878824 +0000
	I1024 19:24:38.219623   29716 command_runner.go:130] >  Birth: -
	I1024 19:24:38.219643   29716 start.go:540] Will wait 60s for crictl version
	I1024 19:24:38.219686   29716 ssh_runner.go:195] Run: which crictl
	I1024 19:24:38.223185   29716 command_runner.go:130] > /usr/bin/crictl
	I1024 19:24:38.223321   29716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:24:38.261993   29716 command_runner.go:130] > Version:  0.1.0
	I1024 19:24:38.262013   29716 command_runner.go:130] > RuntimeName:  cri-o
	I1024 19:24:38.262021   29716 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1024 19:24:38.262029   29716 command_runner.go:130] > RuntimeApiVersion:  v1
	I1024 19:24:38.262297   29716 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 19:24:38.262367   29716 ssh_runner.go:195] Run: crio --version
	I1024 19:24:38.302604   29716 command_runner.go:130] > crio version 1.24.1
	I1024 19:24:38.302622   29716 command_runner.go:130] > Version:          1.24.1
	I1024 19:24:38.302633   29716 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1024 19:24:38.302641   29716 command_runner.go:130] > GitTreeState:     dirty
	I1024 19:24:38.302652   29716 command_runner.go:130] > BuildDate:        2023-10-16T21:18:20Z
	I1024 19:24:38.302659   29716 command_runner.go:130] > GoVersion:        go1.19.9
	I1024 19:24:38.302669   29716 command_runner.go:130] > Compiler:         gc
	I1024 19:24:38.302677   29716 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:24:38.302686   29716 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:24:38.302701   29716 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:24:38.302712   29716 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:24:38.302720   29716 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:24:38.303905   29716 ssh_runner.go:195] Run: crio --version
	I1024 19:24:38.345964   29716 command_runner.go:130] > crio version 1.24.1
	I1024 19:24:38.345996   29716 command_runner.go:130] > Version:          1.24.1
	I1024 19:24:38.346008   29716 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1024 19:24:38.346015   29716 command_runner.go:130] > GitTreeState:     dirty
	I1024 19:24:38.346021   29716 command_runner.go:130] > BuildDate:        2023-10-16T21:18:20Z
	I1024 19:24:38.346026   29716 command_runner.go:130] > GoVersion:        go1.19.9
	I1024 19:24:38.346030   29716 command_runner.go:130] > Compiler:         gc
	I1024 19:24:38.346034   29716 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:24:38.346040   29716 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:24:38.346050   29716 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:24:38.346060   29716 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:24:38.346069   29716 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:24:38.348028   29716 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 19:24:38.349366   29716 main.go:141] libmachine: (multinode-632589) Calling .GetIP
	I1024 19:24:38.351906   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:38.352255   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:24:38.352284   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:24:38.352473   29716 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 19:24:38.356327   29716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:24:38.367561   29716 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:24:38.367618   29716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:24:38.400163   29716 command_runner.go:130] > {
	I1024 19:24:38.400184   29716 command_runner.go:130] >   "images": [
	I1024 19:24:38.400189   29716 command_runner.go:130] >   ]
	I1024 19:24:38.400193   29716 command_runner.go:130] > }
	I1024 19:24:38.400559   29716 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 19:24:38.400632   29716 ssh_runner.go:195] Run: which lz4
	I1024 19:24:38.404454   29716 command_runner.go:130] > /usr/bin/lz4
	I1024 19:24:38.404484   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1024 19:24:38.404575   29716 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 19:24:38.408414   29716 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:24:38.408621   29716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:24:38.408648   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1024 19:24:40.194324   29716 crio.go:444] Took 1.789774 seconds to copy over tarball
	I1024 19:24:40.194381   29716 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 19:24:43.118912   29716 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.924501865s)
	I1024 19:24:43.118941   29716 crio.go:451] Took 2.924596 seconds to extract the tarball
	I1024 19:24:43.118950   29716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 19:24:43.160054   29716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:24:43.228474   29716 command_runner.go:130] > {
	I1024 19:24:43.228498   29716 command_runner.go:130] >   "images": [
	I1024 19:24:43.228502   29716 command_runner.go:130] >     {
	I1024 19:24:43.228510   29716 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1024 19:24:43.228519   29716 command_runner.go:130] >       "repoTags": [
	I1024 19:24:43.228532   29716 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1024 19:24:43.228538   29716 command_runner.go:130] >       ],
	I1024 19:24:43.228545   29716 command_runner.go:130] >       "repoDigests": [
	I1024 19:24:43.228559   29716 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1024 19:24:43.228580   29716 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1024 19:24:43.228584   29716 command_runner.go:130] >       ],
	I1024 19:24:43.228588   29716 command_runner.go:130] >       "size": "65258016",
	I1024 19:24:43.228593   29716 command_runner.go:130] >       "uid": null,
	I1024 19:24:43.228597   29716 command_runner.go:130] >       "username": "",
	I1024 19:24:43.228603   29716 command_runner.go:130] >       "spec": null,
	I1024 19:24:43.228607   29716 command_runner.go:130] >       "pinned": false
	I1024 19:24:43.228612   29716 command_runner.go:130] >     },
	I1024 19:24:43.228615   29716 command_runner.go:130] >     {
	I1024 19:24:43.228623   29716 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1024 19:24:43.228633   29716 command_runner.go:130] >       "repoTags": [
	I1024 19:24:43.228642   29716 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1024 19:24:43.228652   29716 command_runner.go:130] >       ],
	I1024 19:24:43.228659   29716 command_runner.go:130] >       "repoDigests": [
	I1024 19:24:43.228678   29716 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1024 19:24:43.228691   29716 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1024 19:24:43.228697   29716 command_runner.go:130] >       ],
	I1024 19:24:43.228705   29716 command_runner.go:130] >       "size": "31470524",
	I1024 19:24:43.228715   29716 command_runner.go:130] >       "uid": null,
	I1024 19:24:43.228726   29716 command_runner.go:130] >       "username": "",
	I1024 19:24:43.228737   29716 command_runner.go:130] >       "spec": null,
	I1024 19:24:43.228747   29716 command_runner.go:130] >       "pinned": false
	I1024 19:24:43.228757   29716 command_runner.go:130] >     },
	I1024 19:24:43.228766   29716 command_runner.go:130] >     {
	I1024 19:24:43.228779   29716 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1024 19:24:43.228786   29716 command_runner.go:130] >       "repoTags": [
	I1024 19:24:43.228793   29716 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1024 19:24:43.228803   29716 command_runner.go:130] >       ],
	I1024 19:24:43.228814   29716 command_runner.go:130] >       "repoDigests": [
	I1024 19:24:43.228829   29716 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1024 19:24:43.228844   29716 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1024 19:24:43.228854   29716 command_runner.go:130] >       ],
	I1024 19:24:43.228867   29716 command_runner.go:130] >       "size": "53621675",
	I1024 19:24:43.228874   29716 command_runner.go:130] >       "uid": null,
	I1024 19:24:43.228880   29716 command_runner.go:130] >       "username": "",
	I1024 19:24:43.228891   29716 command_runner.go:130] >       "spec": null,
	I1024 19:24:43.228902   29716 command_runner.go:130] >       "pinned": false
	I1024 19:24:43.228911   29716 command_runner.go:130] >     },
	I1024 19:24:43.228920   29716 command_runner.go:130] >     {
	I1024 19:24:43.228933   29716 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1024 19:24:43.228943   29716 command_runner.go:130] >       "repoTags": [
	I1024 19:24:43.228951   29716 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1024 19:24:43.228957   29716 command_runner.go:130] >       ],
	I1024 19:24:43.228964   29716 command_runner.go:130] >       "repoDigests": [
	I1024 19:24:43.228980   29716 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1024 19:24:43.228995   29716 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1024 19:24:43.229011   29716 command_runner.go:130] >       ],
	I1024 19:24:43.229022   29716 command_runner.go:130] >       "size": "295456551",
	I1024 19:24:43.229031   29716 command_runner.go:130] >       "uid": {
	I1024 19:24:43.229039   29716 command_runner.go:130] >         "value": "0"
	I1024 19:24:43.229045   29716 command_runner.go:130] >       },
	I1024 19:24:43.229055   29716 command_runner.go:130] >       "username": "",
	I1024 19:24:43.229066   29716 command_runner.go:130] >       "spec": null,
	I1024 19:24:43.229076   29716 command_runner.go:130] >       "pinned": false
	I1024 19:24:43.229085   29716 command_runner.go:130] >     },
	I1024 19:24:43.229094   29716 command_runner.go:130] >     {
	I1024 19:24:43.229107   29716 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1024 19:24:43.229117   29716 command_runner.go:130] >       "repoTags": [
	I1024 19:24:43.229125   29716 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1024 19:24:43.229134   29716 command_runner.go:130] >       ],
	I1024 19:24:43.229145   29716 command_runner.go:130] >       "repoDigests": [
	I1024 19:24:43.229161   29716 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1024 19:24:43.229176   29716 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1024 19:24:43.229185   29716 command_runner.go:130] >       ],
	I1024 19:24:43.229196   29716 command_runner.go:130] >       "size": "127165392",
	I1024 19:24:43.229205   29716 command_runner.go:130] >       "uid": {
	I1024 19:24:43.229211   29716 command_runner.go:130] >         "value": "0"
	I1024 19:24:43.229216   29716 command_runner.go:130] >       },
	I1024 19:24:43.229234   29716 command_runner.go:130] >       "username": "",
	I1024 19:24:43.229245   29716 command_runner.go:130] >       "spec": null,
	I1024 19:24:43.229255   29716 command_runner.go:130] >       "pinned": false
	I1024 19:24:43.229264   29716 command_runner.go:130] >     },
	I1024 19:24:43.229273   29716 command_runner.go:130] >     {
	I1024 19:24:43.229285   29716 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1024 19:24:43.229292   29716 command_runner.go:130] >       "repoTags": [
	I1024 19:24:43.229315   29716 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1024 19:24:43.229325   29716 command_runner.go:130] >       ],
	I1024 19:24:43.229334   29716 command_runner.go:130] >       "repoDigests": [
	I1024 19:24:43.229353   29716 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1024 19:24:43.229369   29716 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1024 19:24:43.229378   29716 command_runner.go:130] >       ],
	I1024 19:24:43.229389   29716 command_runner.go:130] >       "size": "123188534",
	I1024 19:24:43.229400   29716 command_runner.go:130] >       "uid": {
	I1024 19:24:43.229410   29716 command_runner.go:130] >         "value": "0"
	I1024 19:24:43.229417   29716 command_runner.go:130] >       },
	I1024 19:24:43.229427   29716 command_runner.go:130] >       "username": "",
	I1024 19:24:43.229439   29716 command_runner.go:130] >       "spec": null,
	I1024 19:24:43.229448   29716 command_runner.go:130] >       "pinned": false
	I1024 19:24:43.229452   29716 command_runner.go:130] >     },
	I1024 19:24:43.229458   29716 command_runner.go:130] >     {
	I1024 19:24:43.229468   29716 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1024 19:24:43.229480   29716 command_runner.go:130] >       "repoTags": [
	I1024 19:24:43.229498   29716 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1024 19:24:43.229507   29716 command_runner.go:130] >       ],
	I1024 19:24:43.229514   29716 command_runner.go:130] >       "repoDigests": [
	I1024 19:24:43.229529   29716 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1024 19:24:43.229541   29716 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1024 19:24:43.229549   29716 command_runner.go:130] >       ],
	I1024 19:24:43.229560   29716 command_runner.go:130] >       "size": "74691991",
	I1024 19:24:43.229570   29716 command_runner.go:130] >       "uid": null,
	I1024 19:24:43.229580   29716 command_runner.go:130] >       "username": "",
	I1024 19:24:43.229590   29716 command_runner.go:130] >       "spec": null,
	I1024 19:24:43.229600   29716 command_runner.go:130] >       "pinned": false
	I1024 19:24:43.229609   29716 command_runner.go:130] >     },
	I1024 19:24:43.229618   29716 command_runner.go:130] >     {
	I1024 19:24:43.229628   29716 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1024 19:24:43.229637   29716 command_runner.go:130] >       "repoTags": [
	I1024 19:24:43.229650   29716 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1024 19:24:43.229660   29716 command_runner.go:130] >       ],
	I1024 19:24:43.229667   29716 command_runner.go:130] >       "repoDigests": [
	I1024 19:24:43.229718   29716 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1024 19:24:43.229736   29716 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1024 19:24:43.229743   29716 command_runner.go:130] >       ],
	I1024 19:24:43.229754   29716 command_runner.go:130] >       "size": "61498678",
	I1024 19:24:43.229764   29716 command_runner.go:130] >       "uid": {
	I1024 19:24:43.229772   29716 command_runner.go:130] >         "value": "0"
	I1024 19:24:43.229781   29716 command_runner.go:130] >       },
	I1024 19:24:43.229788   29716 command_runner.go:130] >       "username": "",
	I1024 19:24:43.229794   29716 command_runner.go:130] >       "spec": null,
	I1024 19:24:43.229798   29716 command_runner.go:130] >       "pinned": false
	I1024 19:24:43.229803   29716 command_runner.go:130] >     },
	I1024 19:24:43.229814   29716 command_runner.go:130] >     {
	I1024 19:24:43.229827   29716 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1024 19:24:43.229838   29716 command_runner.go:130] >       "repoTags": [
	I1024 19:24:43.229846   29716 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1024 19:24:43.229855   29716 command_runner.go:130] >       ],
	I1024 19:24:43.229866   29716 command_runner.go:130] >       "repoDigests": [
	I1024 19:24:43.229878   29716 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1024 19:24:43.229889   29716 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1024 19:24:43.229899   29716 command_runner.go:130] >       ],
	I1024 19:24:43.229907   29716 command_runner.go:130] >       "size": "750414",
	I1024 19:24:43.229917   29716 command_runner.go:130] >       "uid": {
	I1024 19:24:43.229929   29716 command_runner.go:130] >         "value": "65535"
	I1024 19:24:43.229938   29716 command_runner.go:130] >       },
	I1024 19:24:43.229945   29716 command_runner.go:130] >       "username": "",
	I1024 19:24:43.229955   29716 command_runner.go:130] >       "spec": null,
	I1024 19:24:43.229966   29716 command_runner.go:130] >       "pinned": false
	I1024 19:24:43.229975   29716 command_runner.go:130] >     }
	I1024 19:24:43.229980   29716 command_runner.go:130] >   ]
	I1024 19:24:43.229987   29716 command_runner.go:130] > }
	I1024 19:24:43.230154   29716 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:24:43.230171   29716 cache_images.go:84] Images are preloaded, skipping loading
	I1024 19:24:43.230237   29716 ssh_runner.go:195] Run: crio config
	I1024 19:24:43.280351   29716 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1024 19:24:43.280383   29716 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1024 19:24:43.280395   29716 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1024 19:24:43.280412   29716 command_runner.go:130] > #
	I1024 19:24:43.280420   29716 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1024 19:24:43.280430   29716 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1024 19:24:43.280436   29716 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1024 19:24:43.280452   29716 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1024 19:24:43.280456   29716 command_runner.go:130] > # reload'.
	I1024 19:24:43.280462   29716 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1024 19:24:43.280474   29716 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1024 19:24:43.280499   29716 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1024 19:24:43.280512   29716 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1024 19:24:43.280515   29716 command_runner.go:130] > [crio]
	I1024 19:24:43.280522   29716 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1024 19:24:43.280529   29716 command_runner.go:130] > # containers images, in this directory.
	I1024 19:24:43.280678   29716 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1024 19:24:43.280696   29716 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1024 19:24:43.281006   29716 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1024 19:24:43.281022   29716 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1024 19:24:43.281033   29716 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1024 19:24:43.281277   29716 command_runner.go:130] > storage_driver = "overlay"
	I1024 19:24:43.281292   29716 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1024 19:24:43.281312   29716 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1024 19:24:43.281324   29716 command_runner.go:130] > storage_option = [
	I1024 19:24:43.281735   29716 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1024 19:24:43.281790   29716 command_runner.go:130] > ]
	I1024 19:24:43.281808   29716 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1024 19:24:43.281815   29716 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1024 19:24:43.282211   29716 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1024 19:24:43.282225   29716 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1024 19:24:43.282236   29716 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1024 19:24:43.282244   29716 command_runner.go:130] > # always happen on a node reboot
	I1024 19:24:43.282705   29716 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1024 19:24:43.282720   29716 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1024 19:24:43.282731   29716 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1024 19:24:43.282752   29716 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1024 19:24:43.283281   29716 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1024 19:24:43.283300   29716 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1024 19:24:43.283314   29716 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1024 19:24:43.283323   29716 command_runner.go:130] > # internal_wipe = true
	I1024 19:24:43.283333   29716 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1024 19:24:43.283352   29716 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1024 19:24:43.283365   29716 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1024 19:24:43.283370   29716 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1024 19:24:43.283376   29716 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1024 19:24:43.283383   29716 command_runner.go:130] > [crio.api]
	I1024 19:24:43.283391   29716 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1024 19:24:43.283397   29716 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1024 19:24:43.283408   29716 command_runner.go:130] > # IP address on which the stream server will listen.
	I1024 19:24:43.283416   29716 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1024 19:24:43.283428   29716 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1024 19:24:43.283440   29716 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1024 19:24:43.283450   29716 command_runner.go:130] > # stream_port = "0"
	I1024 19:24:43.283462   29716 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1024 19:24:43.283472   29716 command_runner.go:130] > # stream_enable_tls = false
	I1024 19:24:43.283480   29716 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1024 19:24:43.283485   29716 command_runner.go:130] > # stream_idle_timeout = ""
	I1024 19:24:43.283495   29716 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1024 19:24:43.283505   29716 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1024 19:24:43.283518   29716 command_runner.go:130] > # minutes.
	I1024 19:24:43.283526   29716 command_runner.go:130] > # stream_tls_cert = ""
	I1024 19:24:43.283539   29716 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1024 19:24:43.283552   29716 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1024 19:24:43.283563   29716 command_runner.go:130] > # stream_tls_key = ""
	I1024 19:24:43.283575   29716 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1024 19:24:43.283588   29716 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1024 19:24:43.283601   29716 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1024 19:24:43.283610   29716 command_runner.go:130] > # stream_tls_ca = ""
	I1024 19:24:43.283622   29716 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:24:43.283634   29716 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1024 19:24:43.283658   29716 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:24:43.283669   29716 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1024 19:24:43.283697   29716 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1024 19:24:43.283711   29716 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1024 19:24:43.283718   29716 command_runner.go:130] > [crio.runtime]
	I1024 19:24:43.283728   29716 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1024 19:24:43.283740   29716 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1024 19:24:43.283754   29716 command_runner.go:130] > # "nofile=1024:2048"
	I1024 19:24:43.283764   29716 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1024 19:24:43.283772   29716 command_runner.go:130] > # default_ulimits = [
	I1024 19:24:43.283776   29716 command_runner.go:130] > # ]
	I1024 19:24:43.283784   29716 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1024 19:24:43.283788   29716 command_runner.go:130] > # no_pivot = false
	I1024 19:24:43.283798   29716 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1024 19:24:43.283812   29716 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1024 19:24:43.283824   29716 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1024 19:24:43.283836   29716 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1024 19:24:43.283849   29716 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1024 19:24:43.283863   29716 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:24:43.283874   29716 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1024 19:24:43.283884   29716 command_runner.go:130] > # Cgroup setting for conmon
	I1024 19:24:43.283894   29716 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1024 19:24:43.283905   29716 command_runner.go:130] > conmon_cgroup = "pod"
	I1024 19:24:43.283919   29716 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1024 19:24:43.283930   29716 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1024 19:24:43.283962   29716 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:24:43.283972   29716 command_runner.go:130] > conmon_env = [
	I1024 19:24:43.283981   29716 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1024 19:24:43.283991   29716 command_runner.go:130] > ]
	I1024 19:24:43.284000   29716 command_runner.go:130] > # Additional environment variables to set for all the
	I1024 19:24:43.284012   29716 command_runner.go:130] > # containers. These are overridden if set in the
	I1024 19:24:43.284024   29716 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1024 19:24:43.284035   29716 command_runner.go:130] > # default_env = [
	I1024 19:24:43.284062   29716 command_runner.go:130] > # ]
	I1024 19:24:43.284080   29716 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1024 19:24:43.284088   29716 command_runner.go:130] > # selinux = false
	I1024 19:24:43.284102   29716 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1024 19:24:43.284115   29716 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1024 19:24:43.284128   29716 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1024 19:24:43.284135   29716 command_runner.go:130] > # seccomp_profile = ""
	I1024 19:24:43.284149   29716 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1024 19:24:43.284161   29716 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1024 19:24:43.284174   29716 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1024 19:24:43.284189   29716 command_runner.go:130] > # which might increase security.
	I1024 19:24:43.284201   29716 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1024 19:24:43.284214   29716 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1024 19:24:43.284229   29716 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1024 19:24:43.284242   29716 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1024 19:24:43.284256   29716 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1024 19:24:43.284267   29716 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:24:43.284626   29716 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1024 19:24:43.284642   29716 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1024 19:24:43.284649   29716 command_runner.go:130] > # the cgroup blockio controller.
	I1024 19:24:43.284685   29716 command_runner.go:130] > # blockio_config_file = ""
	I1024 19:24:43.284703   29716 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1024 19:24:43.284711   29716 command_runner.go:130] > # irqbalance daemon.
	I1024 19:24:43.284793   29716 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1024 19:24:43.284814   29716 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1024 19:24:43.284824   29716 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:24:43.284830   29716 command_runner.go:130] > # rdt_config_file = ""
	I1024 19:24:43.284840   29716 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1024 19:24:43.284851   29716 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1024 19:24:43.284863   29716 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1024 19:24:43.284875   29716 command_runner.go:130] > # separate_pull_cgroup = ""
	I1024 19:24:43.284888   29716 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1024 19:24:43.284901   29716 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1024 19:24:43.284911   29716 command_runner.go:130] > # will be added.
	I1024 19:24:43.284918   29716 command_runner.go:130] > # default_capabilities = [
	I1024 19:24:43.284928   29716 command_runner.go:130] > # 	"CHOWN",
	I1024 19:24:43.284935   29716 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1024 19:24:43.284943   29716 command_runner.go:130] > # 	"FSETID",
	I1024 19:24:43.284953   29716 command_runner.go:130] > # 	"FOWNER",
	I1024 19:24:43.284966   29716 command_runner.go:130] > # 	"SETGID",
	I1024 19:24:43.285004   29716 command_runner.go:130] > # 	"SETUID",
	I1024 19:24:43.285016   29716 command_runner.go:130] > # 	"SETPCAP",
	I1024 19:24:43.285023   29716 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1024 19:24:43.285029   29716 command_runner.go:130] > # 	"KILL",
	I1024 19:24:43.285038   29716 command_runner.go:130] > # ]
	I1024 19:24:43.285048   29716 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1024 19:24:43.285060   29716 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:24:43.285069   29716 command_runner.go:130] > # default_sysctls = [
	I1024 19:24:43.285076   29716 command_runner.go:130] > # ]
	I1024 19:24:43.285090   29716 command_runner.go:130] > # List of devices on the host that a
	I1024 19:24:43.285103   29716 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1024 19:24:43.285113   29716 command_runner.go:130] > # allowed_devices = [
	I1024 19:24:43.285151   29716 command_runner.go:130] > # 	"/dev/fuse",
	I1024 19:24:43.285161   29716 command_runner.go:130] > # ]
	I1024 19:24:43.285169   29716 command_runner.go:130] > # List of additional devices, specified as
	I1024 19:24:43.285185   29716 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1024 19:24:43.285197   29716 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1024 19:24:43.285246   29716 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:24:43.285259   29716 command_runner.go:130] > # additional_devices = [
	I1024 19:24:43.285266   29716 command_runner.go:130] > # ]
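	An uncommented additional_devices entry follows the <device-on-host>:<device-on-container>:<permissions> format described above; the /dev/fuse mapping below is only an illustration, mirroring the allowed_devices default rather than a value from this run:
	[crio.runtime]
	additional_devices = [
		"/dev/fuse:/dev/fuse:rwm",
	]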
	I1024 19:24:43.285277   29716 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1024 19:24:43.285287   29716 command_runner.go:130] > # cdi_spec_dirs = [
	I1024 19:24:43.285304   29716 command_runner.go:130] > # 	"/etc/cdi",
	I1024 19:24:43.285314   29716 command_runner.go:130] > # 	"/var/run/cdi",
	I1024 19:24:43.285319   29716 command_runner.go:130] > # ]
	I1024 19:24:43.285333   29716 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1024 19:24:43.285346   29716 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1024 19:24:43.285356   29716 command_runner.go:130] > # Defaults to false.
	I1024 19:24:43.285366   29716 command_runner.go:130] > # device_ownership_from_security_context = false
	I1024 19:24:43.285379   29716 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1024 19:24:43.285391   29716 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1024 19:24:43.285401   29716 command_runner.go:130] > # hooks_dir = [
	I1024 19:24:43.285408   29716 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1024 19:24:43.285417   29716 command_runner.go:130] > # ]
	I1024 19:24:43.285427   29716 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1024 19:24:43.285447   29716 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1024 19:24:43.285459   29716 command_runner.go:130] > # its default mounts from the following two files:
	I1024 19:24:43.285468   29716 command_runner.go:130] > #
	I1024 19:24:43.285478   29716 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1024 19:24:43.285491   29716 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1024 19:24:43.285502   29716 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1024 19:24:43.285508   29716 command_runner.go:130] > #
	I1024 19:24:43.285528   29716 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1024 19:24:43.285541   29716 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1024 19:24:43.285555   29716 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1024 19:24:43.285566   29716 command_runner.go:130] > #      only add mounts it finds in this file.
	I1024 19:24:43.285573   29716 command_runner.go:130] > #
	I1024 19:24:43.285583   29716 command_runner.go:130] > # default_mounts_file = ""
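	Pointing CRI-O at a custom mounts file would look like the sketch below; /etc/containers/extra-mounts.conf is a hypothetical path, and its contents would use the /SRC:/DST line format described above:
	[crio.runtime]
	default_mounts_file = "/etc/containers/extra-mounts.conf"
	# extra-mounts.conf would contain lines such as:
	#   /etc/pki/ca-trust:/etc/pki/ca-trust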
	I1024 19:24:43.285592   29716 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1024 19:24:43.285605   29716 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1024 19:24:43.285612   29716 command_runner.go:130] > pids_limit = 1024
	I1024 19:24:43.285625   29716 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1024 19:24:43.285638   29716 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1024 19:24:43.285657   29716 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1024 19:24:43.285673   29716 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1024 19:24:43.285684   29716 command_runner.go:130] > # log_size_max = -1
	I1024 19:24:43.285698   29716 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1024 19:24:43.285739   29716 command_runner.go:130] > # log_to_journald = false
	I1024 19:24:43.285755   29716 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1024 19:24:43.285763   29716 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1024 19:24:43.285771   29716 command_runner.go:130] > # Path to directory for container attach sockets.
	I1024 19:24:43.285783   29716 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1024 19:24:43.285791   29716 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1024 19:24:43.285803   29716 command_runner.go:130] > # bind_mount_prefix = ""
	I1024 19:24:43.285812   29716 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1024 19:24:43.285819   29716 command_runner.go:130] > # read_only = false
	I1024 19:24:43.285830   29716 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1024 19:24:43.285843   29716 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1024 19:24:43.285853   29716 command_runner.go:130] > # live configuration reload.
	I1024 19:24:43.285863   29716 command_runner.go:130] > # log_level = "info"
	I1024 19:24:43.285871   29716 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1024 19:24:43.285887   29716 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:24:43.285895   29716 command_runner.go:130] > # log_filter = ""
	I1024 19:24:43.285904   29716 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1024 19:24:43.285914   29716 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1024 19:24:43.285925   29716 command_runner.go:130] > # separated by comma.
	I1024 19:24:43.285936   29716 command_runner.go:130] > # uid_mappings = ""
	I1024 19:24:43.285954   29716 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1024 19:24:43.285966   29716 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1024 19:24:43.285977   29716 command_runner.go:130] > # separated by comma.
	I1024 19:24:43.285983   29716 command_runner.go:130] > # gid_mappings = ""
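	As a worked example of the containerUID:HostUID:Size / containerGID:HostGID:Size forms above (the 100000:65536 range is arbitrary, chosen only to illustrate the syntax):
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"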
	I1024 19:24:43.285996   29716 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1024 19:24:43.286009   29716 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:24:43.286021   29716 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:24:43.286031   29716 command_runner.go:130] > # minimum_mappable_uid = -1
	I1024 19:24:43.286041   29716 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1024 19:24:43.286053   29716 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:24:43.286066   29716 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:24:43.286076   29716 command_runner.go:130] > # minimum_mappable_gid = -1
	I1024 19:24:43.286090   29716 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1024 19:24:43.286103   29716 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1024 19:24:43.286113   29716 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1024 19:24:43.286123   29716 command_runner.go:130] > # ctr_stop_timeout = 30
	I1024 19:24:43.286136   29716 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1024 19:24:43.286148   29716 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1024 19:24:43.286159   29716 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1024 19:24:43.286170   29716 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1024 19:24:43.286177   29716 command_runner.go:130] > drop_infra_ctr = false
	I1024 19:24:43.286190   29716 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1024 19:24:43.286201   29716 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1024 19:24:43.286217   29716 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1024 19:24:43.286227   29716 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1024 19:24:43.286238   29716 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1024 19:24:43.286249   29716 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1024 19:24:43.286263   29716 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1024 19:24:43.286275   29716 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1024 19:24:43.286286   29716 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1024 19:24:43.286300   29716 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1024 19:24:43.286315   29716 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1024 19:24:43.286329   29716 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1024 19:24:43.286340   29716 command_runner.go:130] > # default_runtime = "runc"
	I1024 19:24:43.286352   29716 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1024 19:24:43.286368   29716 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1024 19:24:43.286384   29716 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1024 19:24:43.286395   29716 command_runner.go:130] > # creation as a file is not desired either.
	I1024 19:24:43.286410   29716 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1024 19:24:43.286423   29716 command_runner.go:130] > # the hostname is being managed dynamically.
	I1024 19:24:43.286431   29716 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1024 19:24:43.286440   29716 command_runner.go:130] > # ]
	I1024 19:24:43.286451   29716 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1024 19:24:43.286465   29716 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1024 19:24:43.286479   29716 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1024 19:24:43.286493   29716 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1024 19:24:43.286499   29716 command_runner.go:130] > #
	I1024 19:24:43.286511   29716 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1024 19:24:43.286530   29716 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1024 19:24:43.286540   29716 command_runner.go:130] > #  runtime_type = "oci"
	I1024 19:24:43.286553   29716 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1024 19:24:43.286564   29716 command_runner.go:130] > #  privileged_without_host_devices = false
	I1024 19:24:43.286575   29716 command_runner.go:130] > #  allowed_annotations = []
	I1024 19:24:43.286585   29716 command_runner.go:130] > # Where:
	I1024 19:24:43.286594   29716 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1024 19:24:43.286632   29716 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1024 19:24:43.286646   29716 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1024 19:24:43.286659   29716 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1024 19:24:43.286668   29716 command_runner.go:130] > #   in $PATH.
	I1024 19:24:43.286678   29716 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1024 19:24:43.286688   29716 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1024 19:24:43.286697   29716 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1024 19:24:43.286706   29716 command_runner.go:130] > #   state.
	I1024 19:24:43.286716   29716 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1024 19:24:43.286728   29716 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1024 19:24:43.286742   29716 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1024 19:24:43.286756   29716 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1024 19:24:43.286769   29716 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1024 19:24:43.286781   29716 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1024 19:24:43.286789   29716 command_runner.go:130] > #   The currently recognized values are:
	I1024 19:24:43.286802   29716 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1024 19:24:43.286817   29716 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1024 19:24:43.286830   29716 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1024 19:24:43.286843   29716 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1024 19:24:43.286858   29716 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1024 19:24:43.286872   29716 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1024 19:24:43.286886   29716 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1024 19:24:43.286900   29716 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1024 19:24:43.286911   29716 command_runner.go:130] > #   should be moved to the container's cgroup
	I1024 19:24:43.286922   29716 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1024 19:24:43.286932   29716 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1024 19:24:43.286942   29716 command_runner.go:130] > runtime_type = "oci"
	I1024 19:24:43.286951   29716 command_runner.go:130] > runtime_root = "/run/runc"
	I1024 19:24:43.286961   29716 command_runner.go:130] > runtime_config_path = ""
	I1024 19:24:43.286975   29716 command_runner.go:130] > monitor_path = ""
	I1024 19:24:43.286985   29716 command_runner.go:130] > monitor_cgroup = ""
	I1024 19:24:43.286993   29716 command_runner.go:130] > monitor_exec_cgroup = ""
	I1024 19:24:43.287006   29716 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1024 19:24:43.287016   29716 command_runner.go:130] > # running containers
	I1024 19:24:43.287023   29716 command_runner.go:130] > #[crio.runtime.runtimes.crun]
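	Enabling the crun handler would follow the runtime table format documented above; a minimal sketch, assuming crun were installed at /usr/bin/crun (not something this run verifies):
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"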
	I1024 19:24:43.287037   29716 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1024 19:24:43.287100   29716 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1024 19:24:43.287112   29716 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1024 19:24:43.287124   29716 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1024 19:24:43.287132   29716 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1024 19:24:43.287143   29716 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1024 19:24:43.287153   29716 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1024 19:24:43.287164   29716 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1024 19:24:43.287174   29716 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1024 19:24:43.287185   29716 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1024 19:24:43.287197   29716 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1024 19:24:43.287210   29716 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1024 19:24:43.287228   29716 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1024 19:24:43.287244   29716 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1024 19:24:43.287255   29716 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1024 19:24:43.287271   29716 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1024 19:24:43.287285   29716 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1024 19:24:43.287298   29716 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1024 19:24:43.287309   29716 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1024 19:24:43.287318   29716 command_runner.go:130] > # Example:
	I1024 19:24:43.287327   29716 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1024 19:24:43.287336   29716 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1024 19:24:43.287344   29716 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1024 19:24:43.287356   29716 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1024 19:24:43.287365   29716 command_runner.go:130] > # cpuset = 0
	I1024 19:24:43.287372   29716 command_runner.go:130] > # cpushares = "0-1"
	I1024 19:24:43.287381   29716 command_runner.go:130] > # Where:
	I1024 19:24:43.287416   29716 command_runner.go:130] > # The workload name is workload-type.
	I1024 19:24:43.287436   29716 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1024 19:24:43.287448   29716 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1024 19:24:43.287469   29716 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1024 19:24:43.287485   29716 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1024 19:24:43.287536   29716 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1024 19:24:43.287545   29716 command_runner.go:130] > # 
	I1024 19:24:43.287556   29716 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1024 19:24:43.287565   29716 command_runner.go:130] > #
	I1024 19:24:43.287575   29716 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1024 19:24:43.287593   29716 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1024 19:24:43.287607   29716 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1024 19:24:43.287620   29716 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1024 19:24:43.287633   29716 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1024 19:24:43.287642   29716 command_runner.go:130] > [crio.image]
	I1024 19:24:43.287656   29716 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1024 19:24:43.287664   29716 command_runner.go:130] > # default_transport = "docker://"
	I1024 19:24:43.287678   29716 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1024 19:24:43.287691   29716 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:24:43.287702   29716 command_runner.go:130] > # global_auth_file = ""
	I1024 19:24:43.287716   29716 command_runner.go:130] > # The image used to instantiate infra containers.
	I1024 19:24:43.287729   29716 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:24:43.287739   29716 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1024 19:24:43.287748   29716 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1024 19:24:43.287760   29716 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:24:43.287772   29716 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:24:43.287780   29716 command_runner.go:130] > # pause_image_auth_file = ""
	I1024 19:24:43.287788   29716 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1024 19:24:43.287796   29716 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1024 19:24:43.287805   29716 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1024 19:24:43.287814   29716 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1024 19:24:43.287821   29716 command_runner.go:130] > # pause_command = "/pause"
	I1024 19:24:43.287830   29716 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1024 19:24:43.287839   29716 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1024 19:24:43.287849   29716 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1024 19:24:43.287863   29716 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1024 19:24:43.287874   29716 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1024 19:24:43.287883   29716 command_runner.go:130] > # signature_policy = ""
	I1024 19:24:43.287893   29716 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1024 19:24:43.287911   29716 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1024 19:24:43.287918   29716 command_runner.go:130] > # changing them here.
	I1024 19:24:43.287923   29716 command_runner.go:130] > # insecure_registries = [
	I1024 19:24:43.287929   29716 command_runner.go:130] > # ]
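	If the defaults from /etc/containers/registries.conf were not sufficient, an uncommented entry could look like the following; the registry host is purely illustrative:
	[crio.image]
	insecure_registries = [
		"registry.internal.example:5000",
	]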
	I1024 19:24:43.287935   29716 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1024 19:24:43.287942   29716 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1024 19:24:43.287949   29716 command_runner.go:130] > # image_volumes = "mkdir"
	I1024 19:24:43.287960   29716 command_runner.go:130] > # Temporary directory to use for storing big files
	I1024 19:24:43.287971   29716 command_runner.go:130] > # big_files_temporary_dir = ""
	I1024 19:24:43.287983   29716 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1024 19:24:43.287993   29716 command_runner.go:130] > # CNI plugins.
	I1024 19:24:43.287999   29716 command_runner.go:130] > [crio.network]
	I1024 19:24:43.288011   29716 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1024 19:24:43.288023   29716 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1024 19:24:43.288034   29716 command_runner.go:130] > # cni_default_network = ""
	I1024 19:24:43.288045   29716 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1024 19:24:43.288057   29716 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1024 19:24:43.288067   29716 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1024 19:24:43.288079   29716 command_runner.go:130] > # plugin_dirs = [
	I1024 19:24:43.288086   29716 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1024 19:24:43.288089   29716 command_runner.go:130] > # ]
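	A filled-in network section would reuse the same keys; the network name below is a guess based on the kindnet recommendation later in this log, not a value taken from the generated config:
	[crio.network]
	cni_default_network = "kindnet"
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]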
	I1024 19:24:43.288095   29716 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1024 19:24:43.288101   29716 command_runner.go:130] > [crio.metrics]
	I1024 19:24:43.288106   29716 command_runner.go:130] > # Globally enable or disable metrics support.
	I1024 19:24:43.288113   29716 command_runner.go:130] > enable_metrics = true
	I1024 19:24:43.288118   29716 command_runner.go:130] > # Specify enabled metrics collectors.
	I1024 19:24:43.288125   29716 command_runner.go:130] > # Per default all metrics are enabled.
	I1024 19:24:43.288131   29716 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1024 19:24:43.288139   29716 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1024 19:24:43.288145   29716 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1024 19:24:43.288152   29716 command_runner.go:130] > # metrics_collectors = [
	I1024 19:24:43.288156   29716 command_runner.go:130] > # 	"operations",
	I1024 19:24:43.288161   29716 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1024 19:24:43.288166   29716 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1024 19:24:43.288171   29716 command_runner.go:130] > # 	"operations_errors",
	I1024 19:24:43.288177   29716 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1024 19:24:43.288187   29716 command_runner.go:130] > # 	"image_pulls_by_name",
	I1024 19:24:43.288194   29716 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1024 19:24:43.288198   29716 command_runner.go:130] > # 	"image_pulls_failures",
	I1024 19:24:43.288204   29716 command_runner.go:130] > # 	"image_pulls_successes",
	I1024 19:24:43.288208   29716 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1024 19:24:43.288213   29716 command_runner.go:130] > # 	"image_layer_reuse",
	I1024 19:24:43.288217   29716 command_runner.go:130] > # 	"containers_oom_total",
	I1024 19:24:43.288221   29716 command_runner.go:130] > # 	"containers_oom",
	I1024 19:24:43.288225   29716 command_runner.go:130] > # 	"processes_defunct",
	I1024 19:24:43.288230   29716 command_runner.go:130] > # 	"operations_total",
	I1024 19:24:43.288234   29716 command_runner.go:130] > # 	"operations_latency_seconds",
	I1024 19:24:43.288241   29716 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1024 19:24:43.288246   29716 command_runner.go:130] > # 	"operations_errors_total",
	I1024 19:24:43.288252   29716 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1024 19:24:43.288257   29716 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1024 19:24:43.288261   29716 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1024 19:24:43.288268   29716 command_runner.go:130] > # 	"image_pulls_success_total",
	I1024 19:24:43.288272   29716 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1024 19:24:43.288281   29716 command_runner.go:130] > # 	"containers_oom_count_total",
	I1024 19:24:43.288285   29716 command_runner.go:130] > # ]
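	Restricting collection to a subset of the collectors listed above is a matter of uncommenting the list; the particular pair below is an arbitrary illustration:
	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
	]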
	I1024 19:24:43.288290   29716 command_runner.go:130] > # The port on which the metrics server will listen.
	I1024 19:24:43.288294   29716 command_runner.go:130] > # metrics_port = 9090
	I1024 19:24:43.288303   29716 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1024 19:24:43.288309   29716 command_runner.go:130] > # metrics_socket = ""
	I1024 19:24:43.288314   29716 command_runner.go:130] > # The certificate for the secure metrics server.
	I1024 19:24:43.288323   29716 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1024 19:24:43.288329   29716 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1024 19:24:43.288336   29716 command_runner.go:130] > # certificate on any modification event.
	I1024 19:24:43.288340   29716 command_runner.go:130] > # metrics_cert = ""
	I1024 19:24:43.288344   29716 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1024 19:24:43.288351   29716 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1024 19:24:43.288355   29716 command_runner.go:130] > # metrics_key = ""
	I1024 19:24:43.288364   29716 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1024 19:24:43.288368   29716 command_runner.go:130] > [crio.tracing]
	I1024 19:24:43.288378   29716 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1024 19:24:43.288388   29716 command_runner.go:130] > # enable_tracing = false
	I1024 19:24:43.288404   29716 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1024 19:24:43.288415   29716 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1024 19:24:43.288427   29716 command_runner.go:130] > # Number of samples to collect per million spans.
	I1024 19:24:43.288435   29716 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1024 19:24:43.288447   29716 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1024 19:24:43.288457   29716 command_runner.go:130] > [crio.stats]
	I1024 19:24:43.288466   29716 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1024 19:24:43.288475   29716 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1024 19:24:43.288479   29716 command_runner.go:130] > # stats_collection_period = 0
	I1024 19:24:43.288531   29716 command_runner.go:130] ! time="2023-10-24 19:24:43.258489521Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1024 19:24:43.288549   29716 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1024 19:24:43.288618   29716 cni.go:84] Creating CNI manager for ""
	I1024 19:24:43.288630   29716 cni.go:136] 1 nodes found, recommending kindnet
	I1024 19:24:43.288646   29716 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:24:43.288663   29716 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-632589 NodeName:multinode-632589 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:24:43.288779   29716 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-632589"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 19:24:43.288841   29716 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-632589 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:24:43.288887   29716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:24:43.297352   29716 command_runner.go:130] > kubeadm
	I1024 19:24:43.297370   29716 command_runner.go:130] > kubectl
	I1024 19:24:43.297377   29716 command_runner.go:130] > kubelet
	I1024 19:24:43.297480   29716 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:24:43.297543   29716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:24:43.305455   29716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1024 19:24:43.320571   29716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:24:43.335139   29716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1024 19:24:43.349904   29716 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I1024 19:24:43.353196   29716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:24:43.364867   29716 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589 for IP: 192.168.39.247
	I1024 19:24:43.364892   29716 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:43.365027   29716 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 19:24:43.365061   29716 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 19:24:43.365100   29716 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key
	I1024 19:24:43.365120   29716 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt with IP's: []
	I1024 19:24:43.539108   29716 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt ...
	I1024 19:24:43.539136   29716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt: {Name:mk767fc4ec964922cad62565405d70fc2cf52f3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:43.539302   29716 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key ...
	I1024 19:24:43.539312   29716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key: {Name:mkf80497cfe197cd766c69869594497400482ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:43.539378   29716 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.key.890e8c75
	I1024 19:24:43.539392   29716 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.crt.890e8c75 with IP's: [192.168.39.247 10.96.0.1 127.0.0.1 10.0.0.1]
	I1024 19:24:43.742937   29716 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.crt.890e8c75 ...
	I1024 19:24:43.742964   29716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.crt.890e8c75: {Name:mk89b93ddda7249e0238d8965e01b168d492378b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:43.743108   29716 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.key.890e8c75 ...
	I1024 19:24:43.743120   29716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.key.890e8c75: {Name:mkc574ad6d0fbf11eafffb0e63719b6c11462948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:43.743184   29716 certs.go:337] copying /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.crt.890e8c75 -> /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.crt
	I1024 19:24:43.743259   29716 certs.go:341] copying /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.key.890e8c75 -> /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.key
	I1024 19:24:43.743310   29716 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.key
	I1024 19:24:43.743323   29716 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.crt with IP's: []
	I1024 19:24:43.835660   29716 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.crt ...
	I1024 19:24:43.835689   29716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.crt: {Name:mke70d787440d1d5b9d246a05809ff6e39a0188e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:43.835833   29716 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.key ...
	I1024 19:24:43.835842   29716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.key: {Name:mka1d6e0e3f2320a68f2be3fcbf3c5578053e48e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:43.835904   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1024 19:24:43.835920   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1024 19:24:43.835930   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1024 19:24:43.835939   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1024 19:24:43.835956   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 19:24:43.835968   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 19:24:43.835980   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 19:24:43.835993   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 19:24:43.836039   29716 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 19:24:43.836071   29716 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 19:24:43.836089   29716 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 19:24:43.836141   29716 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 19:24:43.836169   29716 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:24:43.836194   29716 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 19:24:43.836237   29716 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:24:43.836267   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:24:43.836279   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem -> /usr/share/ca-certificates/16298.pem
	I1024 19:24:43.836291   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> /usr/share/ca-certificates/162982.pem
	I1024 19:24:43.836853   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:24:43.864220   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:24:43.888842   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:24:43.911422   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 19:24:43.933467   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:24:43.955419   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:24:43.977600   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:24:44.000248   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 19:24:44.023595   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:24:44.049072   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 19:24:44.072134   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 19:24:44.094316   29716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:24:44.109996   29716 ssh_runner.go:195] Run: openssl version
	I1024 19:24:44.115722   29716 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1024 19:24:44.115790   29716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 19:24:44.125610   29716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 19:24:44.129998   29716 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 19:24:44.130084   29716 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 19:24:44.130136   29716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 19:24:44.135283   29716 command_runner.go:130] > 3ec20f2e
	I1024 19:24:44.135343   29716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 19:24:44.144539   29716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:24:44.154034   29716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:24:44.158260   29716 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:24:44.158287   29716 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:24:44.158335   29716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:24:44.163214   29716 command_runner.go:130] > b5213941
	I1024 19:24:44.163397   29716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:24:44.173362   29716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 19:24:44.183823   29716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 19:24:44.188809   29716 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 19:24:44.188977   29716 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 19:24:44.189025   29716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 19:24:44.194199   29716 command_runner.go:130] > 51391683
	I1024 19:24:44.194261   29716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 19:24:44.203180   29716 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:24:44.206844   29716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:24:44.207054   29716 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:24:44.207102   29716 kubeadm.go:404] StartCluster: {Name:multinode-632589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:24:44.207193   29716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:24:44.207236   29716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:24:44.245851   29716 cri.go:89] found id: ""
	I1024 19:24:44.245899   29716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:24:44.254193   29716 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1024 19:24:44.254210   29716 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1024 19:24:44.254217   29716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1024 19:24:44.254276   29716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:24:44.262163   29716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:24:44.271558   29716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1024 19:24:44.271586   29716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1024 19:24:44.271597   29716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1024 19:24:44.271607   29716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:24:44.271678   29716 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:24:44.271708   29716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1024 19:24:44.376235   29716 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1024 19:24:44.376263   29716 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1024 19:24:44.376310   29716 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:24:44.376337   29716 command_runner.go:130] > [preflight] Running pre-flight checks
	I1024 19:24:44.612397   29716 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:24:44.612438   29716 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:24:44.612523   29716 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:24:44.612535   29716 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:24:44.612702   29716 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 19:24:44.612728   29716 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 19:24:44.845002   29716 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:24:44.845099   29716 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:24:45.011561   29716 out.go:204]   - Generating certificates and keys ...
	I1024 19:24:45.011723   29716 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1024 19:24:45.011739   29716 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:24:45.011838   29716 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1024 19:24:45.011855   29716 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:24:45.011939   29716 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:24:45.011951   29716 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:24:45.262186   29716 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:24:45.262208   29716 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:24:45.406551   29716 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1024 19:24:45.406574   29716 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1024 19:24:45.647764   29716 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1024 19:24:45.647803   29716 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1024 19:24:45.809035   29716 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1024 19:24:45.809069   29716 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1024 19:24:45.809220   29716 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-632589] and IPs [192.168.39.247 127.0.0.1 ::1]
	I1024 19:24:45.809245   29716 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-632589] and IPs [192.168.39.247 127.0.0.1 ::1]
	I1024 19:24:46.076409   29716 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1024 19:24:46.076437   29716 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1024 19:24:46.076586   29716 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-632589] and IPs [192.168.39.247 127.0.0.1 ::1]
	I1024 19:24:46.076611   29716 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-632589] and IPs [192.168.39.247 127.0.0.1 ::1]
	I1024 19:24:46.161403   29716 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:24:46.161436   29716 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:24:46.299707   29716 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:24:46.299737   29716 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:24:46.520531   29716 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1024 19:24:46.520556   29716 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1024 19:24:46.520938   29716 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:24:46.520957   29716 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:24:46.643821   29716 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:24:46.643844   29716 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:24:46.859967   29716 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:24:46.859999   29716 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:24:46.983395   29716 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:24:46.983418   29716 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:24:47.148490   29716 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:24:47.148513   29716 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:24:47.149196   29716 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:24:47.149219   29716 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:24:47.152260   29716 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:24:47.154171   29716 out.go:204]   - Booting up control plane ...
	I1024 19:24:47.152350   29716 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:24:47.154301   29716 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:24:47.154313   29716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:24:47.154394   29716 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:24:47.154408   29716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:24:47.154502   29716 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:24:47.154526   29716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:24:47.169739   29716 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:24:47.169767   29716 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:24:47.169900   29716 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:24:47.169916   29716 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:24:47.169968   29716 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:24:47.169980   29716 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1024 19:24:47.283068   29716 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:24:47.283089   29716 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:24:54.781535   29716 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503142 seconds
	I1024 19:24:54.781548   29716 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.503142 seconds
	I1024 19:24:54.781704   29716 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:24:54.781729   29716 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:24:54.798900   29716 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:24:54.798925   29716 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:24:55.326682   29716 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:24:55.326721   29716 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:24:55.326935   29716 kubeadm.go:322] [mark-control-plane] Marking the node multinode-632589 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 19:24:55.326958   29716 command_runner.go:130] > [mark-control-plane] Marking the node multinode-632589 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 19:24:55.841155   29716 kubeadm.go:322] [bootstrap-token] Using token: wh7fi4.bfhbraqg6w5gexra
	I1024 19:24:55.841194   29716 command_runner.go:130] > [bootstrap-token] Using token: wh7fi4.bfhbraqg6w5gexra
	I1024 19:24:55.842655   29716 out.go:204]   - Configuring RBAC rules ...
	I1024 19:24:55.842792   29716 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:24:55.842808   29716 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:24:55.848144   29716 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:24:55.848169   29716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:24:55.855483   29716 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:24:55.855499   29716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:24:55.858614   29716 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:24:55.858627   29716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:24:55.865199   29716 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:24:55.865212   29716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:24:55.868606   29716 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:24:55.868618   29716 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:24:55.889087   29716 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:24:55.889116   29716 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:24:56.152731   29716 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:24:56.152758   29716 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1024 19:24:56.257302   29716 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:24:56.257333   29716 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1024 19:24:56.258741   29716 kubeadm.go:322] 
	I1024 19:24:56.258831   29716 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:24:56.258849   29716 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1024 19:24:56.258856   29716 kubeadm.go:322] 
	I1024 19:24:56.258937   29716 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:24:56.258958   29716 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1024 19:24:56.258963   29716 kubeadm.go:322] 
	I1024 19:24:56.258986   29716 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:24:56.258994   29716 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1024 19:24:56.259068   29716 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:24:56.259076   29716 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:24:56.259154   29716 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:24:56.259164   29716 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:24:56.259170   29716 kubeadm.go:322] 
	I1024 19:24:56.259344   29716 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1024 19:24:56.259364   29716 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1024 19:24:56.259373   29716 kubeadm.go:322] 
	I1024 19:24:56.259453   29716 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 19:24:56.259463   29716 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 19:24:56.259479   29716 kubeadm.go:322] 
	I1024 19:24:56.259561   29716 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1024 19:24:56.259573   29716 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:24:56.259680   29716 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:24:56.259698   29716 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:24:56.259801   29716 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:24:56.259819   29716 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:24:56.259826   29716 kubeadm.go:322] 
	I1024 19:24:56.259927   29716 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:24:56.259940   29716 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:24:56.260047   29716 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1024 19:24:56.260060   29716 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:24:56.260067   29716 kubeadm.go:322] 
	I1024 19:24:56.260268   29716 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token wh7fi4.bfhbraqg6w5gexra \
	I1024 19:24:56.260284   29716 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wh7fi4.bfhbraqg6w5gexra \
	I1024 19:24:56.260424   29716 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f \
	I1024 19:24:56.260436   29716 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f \
	I1024 19:24:56.260468   29716 command_runner.go:130] > 	--control-plane 
	I1024 19:24:56.260477   29716 kubeadm.go:322] 	--control-plane 
	I1024 19:24:56.260496   29716 kubeadm.go:322] 
	I1024 19:24:56.260630   29716 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:24:56.260639   29716 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:24:56.260649   29716 kubeadm.go:322] 
	I1024 19:24:56.260787   29716 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token wh7fi4.bfhbraqg6w5gexra \
	I1024 19:24:56.260809   29716 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wh7fi4.bfhbraqg6w5gexra \
	I1024 19:24:56.260969   29716 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
	I1024 19:24:56.260980   29716 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
	I1024 19:24:56.261350   29716 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:24:56.261366   29716 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:24:56.261385   29716 cni.go:84] Creating CNI manager for ""
	I1024 19:24:56.261393   29716 cni.go:136] 1 nodes found, recommending kindnet
	I1024 19:24:56.263044   29716 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1024 19:24:56.264441   29716 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:24:56.290304   29716 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1024 19:24:56.290335   29716 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1024 19:24:56.290368   29716 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1024 19:24:56.290384   29716 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:24:56.290393   29716 command_runner.go:130] > Access: 2023-10-24 19:24:24.936689766 +0000
	I1024 19:24:56.290404   29716 command_runner.go:130] > Modify: 2023-10-16 21:25:26.000000000 +0000
	I1024 19:24:56.290413   29716 command_runner.go:130] > Change: 2023-10-24 19:24:23.066689766 +0000
	I1024 19:24:56.290420   29716 command_runner.go:130] >  Birth: -
	I1024 19:24:56.295944   29716 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 19:24:56.295966   29716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:24:56.327511   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 19:24:57.248219   29716 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1024 19:24:57.254509   29716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1024 19:24:57.263476   29716 command_runner.go:130] > serviceaccount/kindnet created
	I1024 19:24:57.279108   29716 command_runner.go:130] > daemonset.apps/kindnet created
	I1024 19:24:57.281790   29716 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:24:57.281912   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:57.281964   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=multinode-632589 minikube.k8s.io/updated_at=2023_10_24T19_24_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:57.330288   29716 command_runner.go:130] > -16
	I1024 19:24:57.330330   29716 ops.go:34] apiserver oom_adj: -16
	I1024 19:24:57.515913   29716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1024 19:24:57.518664   29716 command_runner.go:130] > node/multinode-632589 labeled
	I1024 19:24:57.518823   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:57.599511   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:24:57.601725   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:57.688140   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:24:58.188991   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:58.278241   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:24:58.688593   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:58.774089   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:24:59.188632   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:59.273129   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:24:59.688302   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:59.764284   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:00.189364   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:00.273410   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:00.689120   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:00.770702   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:01.189339   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:01.268907   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:01.689289   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:01.775209   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:02.188500   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:02.279513   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:02.688444   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:02.772136   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:03.188411   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:03.270641   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:03.689340   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:03.771418   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:04.188494   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:04.273060   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:04.688768   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:04.786605   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:05.189248   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:05.279493   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:05.689180   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:05.778905   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:06.189224   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:06.284028   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:06.688610   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:06.787311   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:07.188761   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:07.292338   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:07.688670   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:07.777619   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:08.188855   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:08.283375   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:08.689063   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:08.773093   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:09.189234   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:09.290164   29716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:25:09.688665   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:25:09.800012   29716 command_runner.go:130] > NAME      SECRETS   AGE
	I1024 19:25:09.800738   29716 command_runner.go:130] > default   0         0s
	I1024 19:25:09.802777   29716 kubeadm.go:1081] duration metric: took 12.520926507s to wait for elevateKubeSystemPrivileges.
	I1024 19:25:09.802810   29716 kubeadm.go:406] StartCluster complete in 25.595712841s
	I1024 19:25:09.802839   29716 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:25:09.802904   29716 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:25:09.803599   29716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:25:09.803826   29716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:25:09.803972   29716 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:25:09.804048   29716 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:25:09.804050   29716 addons.go:69] Setting storage-provisioner=true in profile "multinode-632589"
	I1024 19:25:09.804076   29716 addons.go:231] Setting addon storage-provisioner=true in "multinode-632589"
	I1024 19:25:09.804091   29716 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:25:09.804138   29716 host.go:66] Checking if "multinode-632589" exists ...
	I1024 19:25:09.804079   29716 addons.go:69] Setting default-storageclass=true in profile "multinode-632589"
	I1024 19:25:09.804175   29716 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-632589"
	I1024 19:25:09.804403   29716 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
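
For readers of this report unfamiliar with the rest.Config dump above, the following is a minimal client-go sketch (not minikube's own code; the function name and error handling are illustrative) that builds an equivalent typed client from the certificate paths the log shows.

    // Illustrative only: mirrors the Host and TLS fields dumped by kapi.go above.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func newClient() (*kubernetes.Clientset, error) {
        cfg := &rest.Config{
            Host: "https://192.168.39.247:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key",
                CAFile:   "/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt",
            },
        }
        // NewForConfig returns the typed clientset used for the API calls logged below.
        return kubernetes.NewForConfig(cfg)
    }
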
	I1024 19:25:09.804564   29716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:25:09.804590   29716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:25:09.804484   29716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:25:09.804672   29716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:25:09.805276   29716 cert_rotation.go:137] Starting client certificate rotation controller
	I1024 19:25:09.805536   29716 round_trippers.go:463] GET https://192.168.39.247:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 19:25:09.805549   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:09.805557   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:09.805562   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:09.818886   29716 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1024 19:25:09.818906   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:09.818916   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:09.818926   29716 round_trippers.go:580]     Content-Length: 291
	I1024 19:25:09.818934   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:09 GMT
	I1024 19:25:09.818943   29716 round_trippers.go:580]     Audit-Id: c10a1f6b-5f10-42ea-8686-46f52f89e0d1
	I1024 19:25:09.818951   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:09.818960   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:09.818969   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:09.818994   29716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d94f45ae-0601-4f22-bf81-4e1e0b9f4023","resourceVersion":"334","creationTimestamp":"2023-10-24T19:24:56Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1024 19:25:09.819403   29716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1024 19:25:09.819411   29716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I1024 19:25:09.819498   29716 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d94f45ae-0601-4f22-bf81-4e1e0b9f4023","resourceVersion":"334","creationTimestamp":"2023-10-24T19:24:56Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1024 19:25:09.819562   29716 round_trippers.go:463] PUT https://192.168.39.247:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 19:25:09.819574   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:09.819591   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:09.819600   29716 round_trippers.go:473]     Content-Type: application/json
	I1024 19:25:09.819609   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:09.819815   29716 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:25:09.819860   29716 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:25:09.820321   29716 main.go:141] libmachine: Using API Version  1
	I1024 19:25:09.820338   29716 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:25:09.820446   29716 main.go:141] libmachine: Using API Version  1
	I1024 19:25:09.820472   29716 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:25:09.820697   29716 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:25:09.820780   29716 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:25:09.820834   29716 main.go:141] libmachine: (multinode-632589) Calling .GetState
	I1024 19:25:09.821198   29716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:25:09.821224   29716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:25:09.822891   29716 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:25:09.823230   29716 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:25:09.823518   29716 addons.go:231] Setting addon default-storageclass=true in "multinode-632589"
	I1024 19:25:09.823551   29716 host.go:66] Checking if "multinode-632589" exists ...
	I1024 19:25:09.823956   29716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:25:09.823985   29716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:25:09.835219   29716 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1024 19:25:09.835245   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:09.835255   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:09 GMT
	I1024 19:25:09.835263   29716 round_trippers.go:580]     Audit-Id: f7c1a0ec-5ac7-4e3a-bacf-2cc3f8f2b92b
	I1024 19:25:09.835271   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:09.835284   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:09.835291   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:09.835298   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:09.835311   29716 round_trippers.go:580]     Content-Length: 291
	I1024 19:25:09.835348   29716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d94f45ae-0601-4f22-bf81-4e1e0b9f4023","resourceVersion":"350","creationTimestamp":"2023-10-24T19:24:56Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1024 19:25:09.835517   29716 round_trippers.go:463] GET https://192.168.39.247:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 19:25:09.835535   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:09.835545   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:09.835554   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:09.835873   29716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40595
	I1024 19:25:09.836399   29716 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:25:09.836984   29716 main.go:141] libmachine: Using API Version  1
	I1024 19:25:09.837007   29716 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:25:09.837344   29716 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:25:09.837554   29716 main.go:141] libmachine: (multinode-632589) Calling .GetState
	I1024 19:25:09.838390   29716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I1024 19:25:09.838841   29716 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:25:09.839297   29716 main.go:141] libmachine: Using API Version  1
	I1024 19:25:09.839317   29716 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:25:09.839380   29716 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:25:09.839608   29716 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:25:09.841136   29716 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:25:09.840056   29716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:25:09.842524   29716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:25:09.842618   29716 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:25:09.842635   29716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:25:09.842653   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:25:09.845643   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:25:09.846095   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:25:09.846130   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:25:09.846383   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:25:09.846575   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:25:09.846772   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:25:09.846934   29716 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:25:09.852409   29716 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1024 19:25:09.852428   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:09.852438   29716 round_trippers.go:580]     Content-Length: 291
	I1024 19:25:09.852450   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:09 GMT
	I1024 19:25:09.852458   29716 round_trippers.go:580]     Audit-Id: 01bd3d36-8f81-4ee2-970c-7de44accd531
	I1024 19:25:09.852469   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:09.852488   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:09.852503   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:09.852512   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:09.853553   29716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d94f45ae-0601-4f22-bf81-4e1e0b9f4023","resourceVersion":"350","creationTimestamp":"2023-10-24T19:24:56Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1024 19:25:09.853732   29716 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-632589" context rescaled to 1 replicas
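
The GET/PUT round trips logged above hit the coredns deployment's scale subresource to drop it from 2 replicas to 1. A minimal client-go sketch of that same operation is shown here for reference (an illustration under stated assumptions, not the actual kapi.go implementation; the function name is hypothetical).

    // Illustrative sketch of the scale-subresource round trip shown in the log above.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func scaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
        deployments := cs.AppsV1().Deployments("kube-system")
        // GET .../deployments/coredns/scale (the first round_trippers request above).
        scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        // PUT the modified Scale object back to the same subresource.
        _, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }
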
	I1024 19:25:09.853779   29716 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:25:09.855601   29716 out.go:177] * Verifying Kubernetes components...
	I1024 19:25:09.856972   29716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:25:09.858217   29716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I1024 19:25:09.858571   29716 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:25:09.859037   29716 main.go:141] libmachine: Using API Version  1
	I1024 19:25:09.859059   29716 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:25:09.859335   29716 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:25:09.859538   29716 main.go:141] libmachine: (multinode-632589) Calling .GetState
	I1024 19:25:09.860931   29716 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:25:09.861192   29716 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:25:09.861207   29716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:25:09.861220   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:25:09.863727   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:25:09.864139   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:25:09.864167   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:25:09.864418   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:25:09.864594   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:25:09.864784   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:25:09.864951   29716 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:25:09.985239   29716 command_runner.go:130] > apiVersion: v1
	I1024 19:25:09.985262   29716 command_runner.go:130] > data:
	I1024 19:25:09.985270   29716 command_runner.go:130] >   Corefile: |
	I1024 19:25:09.985308   29716 command_runner.go:130] >     .:53 {
	I1024 19:25:09.985316   29716 command_runner.go:130] >         errors
	I1024 19:25:09.985323   29716 command_runner.go:130] >         health {
	I1024 19:25:09.985329   29716 command_runner.go:130] >            lameduck 5s
	I1024 19:25:09.985335   29716 command_runner.go:130] >         }
	I1024 19:25:09.985341   29716 command_runner.go:130] >         ready
	I1024 19:25:09.985351   29716 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1024 19:25:09.985359   29716 command_runner.go:130] >            pods insecure
	I1024 19:25:09.985368   29716 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1024 19:25:09.985375   29716 command_runner.go:130] >            ttl 30
	I1024 19:25:09.985384   29716 command_runner.go:130] >         }
	I1024 19:25:09.985393   29716 command_runner.go:130] >         prometheus :9153
	I1024 19:25:09.985398   29716 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1024 19:25:09.985404   29716 command_runner.go:130] >            max_concurrent 1000
	I1024 19:25:09.985410   29716 command_runner.go:130] >         }
	I1024 19:25:09.985416   29716 command_runner.go:130] >         cache 30
	I1024 19:25:09.985423   29716 command_runner.go:130] >         loop
	I1024 19:25:09.985429   29716 command_runner.go:130] >         reload
	I1024 19:25:09.985440   29716 command_runner.go:130] >         loadbalance
	I1024 19:25:09.985447   29716 command_runner.go:130] >     }
	I1024 19:25:09.985465   29716 command_runner.go:130] > kind: ConfigMap
	I1024 19:25:09.985471   29716 command_runner.go:130] > metadata:
	I1024 19:25:09.985486   29716 command_runner.go:130] >   creationTimestamp: "2023-10-24T19:24:56Z"
	I1024 19:25:09.985491   29716 command_runner.go:130] >   name: coredns
	I1024 19:25:09.985495   29716 command_runner.go:130] >   namespace: kube-system
	I1024 19:25:09.985500   29716 command_runner.go:130] >   resourceVersion: "253"
	I1024 19:25:09.985512   29716 command_runner.go:130] >   uid: 2aabb006-845c-4eef-a802-37bc2ba3f811
	I1024 19:25:09.987085   29716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 19:25:09.987310   29716 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:25:09.987632   29716 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:25:09.987955   29716 node_ready.go:35] waiting up to 6m0s for node "multinode-632589" to be "Ready" ...
	I1024 19:25:09.988090   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:09.988104   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:09.988115   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:09.988123   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:10.011983   29716 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I1024 19:25:10.012009   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:10.012017   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:10.012022   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:10.012027   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:10.012032   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:10.012037   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:10 GMT
	I1024 19:25:10.012043   29716 round_trippers.go:580]     Audit-Id: 3a7dc55b-fa8e-479d-8426-19e5f991a105
	I1024 19:25:10.012152   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"336","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 5989 chars]
	I1024 19:25:10.013508   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:10.013527   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:10.013538   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:10.013547   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:10.016585   29716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:25:10.063695   29716 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I1024 19:25:10.063722   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:10.063792   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:10.063811   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:10.063820   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:10.063836   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:10 GMT
	I1024 19:25:10.063848   29716 round_trippers.go:580]     Audit-Id: c24ca5ef-8878-4608-afb3-8780d88d97b0
	I1024 19:25:10.063856   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:10.091603   29716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:25:10.096444   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"336","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:2
4:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 5989 chars]
	I1024 19:25:10.597629   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:10.597653   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:10.597662   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:10.597671   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:10.719301   29716 round_trippers.go:574] Response Status: 200 OK in 121 milliseconds
	I1024 19:25:10.719328   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:10.719336   29716 round_trippers.go:580]     Audit-Id: a88807e6-9ed2-47e3-9d8c-0cd37f07275f
	I1024 19:25:10.719342   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:10.719347   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:10.719352   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:10.719357   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:10.719361   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:10 GMT
	I1024 19:25:10.721280   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:10.811023   29716 command_runner.go:130] > configmap/coredns replaced
	I1024 19:25:10.814096   29716 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
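
(Editor's note, not part of the captured log.) The line above records minikube rewriting the kube-system/coredns ConfigMap so that host.minikube.internal resolves to the host gateway 192.168.39.1. Purely as an illustrative sketch, the injected record can be inspected by reading the Corefile key of that ConfigMap with client-go; the kubeconfig path below is a placeholder, not the profile paths shown earlier in the log:

    // Illustrative only: print the Corefile from the kube-system/coredns ConfigMap
    // to see the host.minikube.internal record the log above says was injected.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path (assumption); any config reaching the cluster works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println(cm.Data["Corefile"])
    }
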
	I1024 19:25:11.015774   29716 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1024 19:25:11.015801   29716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1024 19:25:11.015814   29716 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1024 19:25:11.015829   29716 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1024 19:25:11.015837   29716 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1024 19:25:11.015846   29716 command_runner.go:130] > pod/storage-provisioner created
	I1024 19:25:11.015878   29716 main.go:141] libmachine: Making call to close driver server
	I1024 19:25:11.015901   29716 main.go:141] libmachine: (multinode-632589) Calling .Close
	I1024 19:25:11.015946   29716 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1024 19:25:11.016004   29716 main.go:141] libmachine: Making call to close driver server
	I1024 19:25:11.016022   29716 main.go:141] libmachine: (multinode-632589) Calling .Close
	I1024 19:25:11.016204   29716 main.go:141] libmachine: (multinode-632589) DBG | Closing plugin on server side
	I1024 19:25:11.016242   29716 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:25:11.016252   29716 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:25:11.016262   29716 main.go:141] libmachine: Making call to close driver server
	I1024 19:25:11.016272   29716 main.go:141] libmachine: (multinode-632589) Calling .Close
	I1024 19:25:11.016288   29716 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:25:11.016307   29716 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:25:11.016316   29716 main.go:141] libmachine: Making call to close driver server
	I1024 19:25:11.016330   29716 main.go:141] libmachine: (multinode-632589) Calling .Close
	I1024 19:25:11.017892   29716 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:25:11.017912   29716 main.go:141] libmachine: (multinode-632589) DBG | Closing plugin on server side
	I1024 19:25:11.017915   29716 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:25:11.017926   29716 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:25:11.017929   29716 main.go:141] libmachine: (multinode-632589) DBG | Closing plugin on server side
	I1024 19:25:11.017961   29716 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:25:11.018049   29716 round_trippers.go:463] GET https://192.168.39.247:8443/apis/storage.k8s.io/v1/storageclasses
	I1024 19:25:11.018072   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:11.018091   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:11.018101   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:11.028949   29716 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1024 19:25:11.028967   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:11.028977   29716 round_trippers.go:580]     Audit-Id: 68649822-a697-44f8-8042-d981c26b390a
	I1024 19:25:11.028985   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:11.028993   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:11.029001   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:11.029014   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:11.029026   29716 round_trippers.go:580]     Content-Length: 1273
	I1024 19:25:11.029036   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:11 GMT
	I1024 19:25:11.029102   29716 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"cefcb24b-6919-4725-bf68-105d10e2dc56","resourceVersion":"394","creationTimestamp":"2023-10-24T19:25:10Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-24T19:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1024 19:25:11.029544   29716 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"cefcb24b-6919-4725-bf68-105d10e2dc56","resourceVersion":"394","creationTimestamp":"2023-10-24T19:25:10Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-24T19:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1024 19:25:11.029598   29716 round_trippers.go:463] PUT https://192.168.39.247:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1024 19:25:11.029609   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:11.029620   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:11.029631   29716 round_trippers.go:473]     Content-Type: application/json
	I1024 19:25:11.029644   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:11.034474   29716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:25:11.034491   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:11.034501   29716 round_trippers.go:580]     Audit-Id: 37e6b093-c003-419f-aad4-3b2b6e898b99
	I1024 19:25:11.034510   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:11.034523   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:11.034535   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:11.034549   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:11.034564   29716 round_trippers.go:580]     Content-Length: 1220
	I1024 19:25:11.034576   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:11 GMT
	I1024 19:25:11.034622   29716 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"cefcb24b-6919-4725-bf68-105d10e2dc56","resourceVersion":"394","creationTimestamp":"2023-10-24T19:25:10Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-24T19:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1024 19:25:11.034755   29716 main.go:141] libmachine: Making call to close driver server
	I1024 19:25:11.034770   29716 main.go:141] libmachine: (multinode-632589) Calling .Close
	I1024 19:25:11.035020   29716 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:25:11.035037   29716 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:25:11.037679   29716 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1024 19:25:11.038830   29716 addons.go:502] enable addons completed in 1.234858616s: enabled=[storage-provisioner default-storageclass]
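
(Editor's note, not part of the captured log.) The addon phase above applies storage-provisioner.yaml and storageclass.yaml with the bundled kubectl and then reconciles the "standard" StorageClass via a GET/PUT so it keeps the default-class annotation. A minimal sketch of the same post-condition check in client-go, assuming only what the log shows (the kubeconfig path is a placeholder):

    // Illustrative sketch: confirm the "standard" StorageClass created by the
    // default-storageclass addon exists and carries the default-class annotation,
    // mirroring the GET/PUT reconcile recorded above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path (assumption)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        isDefault := sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true"
        fmt.Printf("provisioner=%s default=%v\n", sc.Provisioner, isDefault)
    }
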
	I1024 19:25:11.097613   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:11.097634   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:11.097642   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:11.097648   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:11.100294   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:11.100319   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:11.100329   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:11 GMT
	I1024 19:25:11.100338   29716 round_trippers.go:580]     Audit-Id: daa89a04-47e5-46cd-9789-a87476946c92
	I1024 19:25:11.100346   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:11.100354   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:11.100362   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:11.100370   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:11.100741   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:11.597191   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:11.597213   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:11.597221   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:11.597227   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:11.599936   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:11.599956   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:11.599963   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:11.599971   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:11 GMT
	I1024 19:25:11.599980   29716 round_trippers.go:580]     Audit-Id: 13d18fd6-5401-41cc-8485-9771cb8cd8ca
	I1024 19:25:11.599992   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:11.600000   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:11.600012   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:11.600177   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:12.097884   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:12.097917   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:12.097926   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:12.097931   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:12.100644   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:12.100661   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:12.100668   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:12.100673   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:12 GMT
	I1024 19:25:12.100678   29716 round_trippers.go:580]     Audit-Id: 88467e09-255e-43fb-973f-7c774a90d598
	I1024 19:25:12.100683   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:12.100689   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:12.100695   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:12.100848   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:12.101169   29716 node_ready.go:58] node "multinode-632589" has status "Ready":"False"
	I1024 19:25:12.597189   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:12.597212   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:12.597223   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:12.597232   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:12.599617   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:12.599631   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:12.599642   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:12.599649   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:12 GMT
	I1024 19:25:12.599657   29716 round_trippers.go:580]     Audit-Id: 992e0b5c-414c-4fc6-a3c1-e64b09ad6544
	I1024 19:25:12.599666   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:12.599684   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:12.599698   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:12.599813   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:13.097143   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:13.097170   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:13.097180   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:13.097186   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:13.099709   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:13.099727   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:13.099734   29716 round_trippers.go:580]     Audit-Id: a2fdcc74-0e60-46d7-b7e2-f30c5c86393f
	I1024 19:25:13.099740   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:13.099745   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:13.099750   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:13.099755   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:13.099760   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:13 GMT
	I1024 19:25:13.099904   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:13.597518   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:13.597539   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:13.597548   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:13.597554   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:13.600529   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:13.600549   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:13.600556   29716 round_trippers.go:580]     Audit-Id: 85961b35-6394-4125-8370-645876743922
	I1024 19:25:13.600562   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:13.600582   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:13.600590   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:13.600599   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:13.600609   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:13 GMT
	I1024 19:25:13.601181   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:14.097872   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:14.097896   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:14.097904   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:14.097910   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:14.100680   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:14.100702   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:14.100710   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:14.100716   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:14 GMT
	I1024 19:25:14.100725   29716 round_trippers.go:580]     Audit-Id: e1216ce8-d5ba-4732-a8ee-b5a093454a31
	I1024 19:25:14.100733   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:14.100741   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:14.100754   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:14.101392   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:14.101710   29716 node_ready.go:58] node "multinode-632589" has status "Ready":"False"
	I1024 19:25:14.597097   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:14.597122   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:14.597130   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:14.597136   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:14.600755   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:14.600775   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:14.600782   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:14.600787   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:14.600792   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:14.600798   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:14.600803   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:14 GMT
	I1024 19:25:14.600808   29716 round_trippers.go:580]     Audit-Id: 374fa832-af5f-4136-a8a3-dadba78e66d2
	I1024 19:25:14.600995   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:15.097221   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:15.097246   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:15.097254   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:15.097260   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:15.099998   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:15.100022   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:15.100032   29716 round_trippers.go:580]     Audit-Id: 25228736-c6f5-4a49-b4a8-8fa9539cc62d
	I1024 19:25:15.100041   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:15.100049   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:15.100058   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:15.100066   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:15.100072   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:15 GMT
	I1024 19:25:15.100366   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:15.597027   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:15.597052   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:15.597060   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:15.597067   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:15.600836   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:15.600865   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:15.600876   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:15.600885   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:15 GMT
	I1024 19:25:15.600893   29716 round_trippers.go:580]     Audit-Id: a2ccf99c-5ac1-4191-81ec-a432aae72e7f
	I1024 19:25:15.600901   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:15.600910   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:15.600919   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:15.601726   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:16.097358   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:16.097382   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:16.097390   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:16.097396   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:16.100316   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:16.100338   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:16.100348   29716 round_trippers.go:580]     Audit-Id: 02b569c4-77d4-4e7a-9530-0405bcda114b
	I1024 19:25:16.100356   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:16.100364   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:16.100373   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:16.100382   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:16.100394   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:16 GMT
	I1024 19:25:16.100525   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:16.597122   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:16.597145   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:16.597153   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:16.597159   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:16.599807   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:16.599833   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:16.599841   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:16.599847   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:16.599852   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:16.599858   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:16 GMT
	I1024 19:25:16.599863   29716 round_trippers.go:580]     Audit-Id: 4832eeb8-24c3-4366-9984-cd6f2231fea2
	I1024 19:25:16.599868   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:16.600059   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"366","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1024 19:25:16.600576   29716 node_ready.go:58] node "multinode-632589" has status "Ready":"False"
	I1024 19:25:17.097474   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:17.097496   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:17.097508   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:17.097519   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:17.100239   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:17.100261   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:17.100270   29716 round_trippers.go:580]     Audit-Id: 21d27723-eab1-48b0-9f37-6d52dc755e85
	I1024 19:25:17.100277   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:17.100288   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:17.100297   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:17.100305   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:17.100312   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:17 GMT
	I1024 19:25:17.100548   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:17.100949   29716 node_ready.go:49] node "multinode-632589" has status "Ready":"True"
	I1024 19:25:17.100966   29716 node_ready.go:38] duration metric: took 7.11295935s waiting for node "multinode-632589" to be "Ready" ...
	I1024 19:25:17.100979   29716 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
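
(Editor's note, not part of the captured log.) The node_ready wait above polls GET /api/v1/nodes/multinode-632589 roughly every 500ms until the node's Ready condition reports True (7.1s in this run), and pod_ready then applies the same pattern to the system-critical pods. A hedged, self-contained sketch of that polling loop with client-go; the kubeconfig path and the loop itself are illustrative stand-ins, not minikube's actual helpers:

    // Illustrative sketch of the readiness poll the log records: fetch the node
    // every 500ms and stop once its Ready condition reports True, or time out.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path (assumption)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-632589", metav1.GetOptions{})
            if err == nil && nodeReady(node) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for Ready")
    }
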
	I1024 19:25:17.101051   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:25:17.101060   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:17.101071   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:17.101082   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:17.107568   29716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1024 19:25:17.107590   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:17.107604   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:17.107612   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:17.107621   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:17.107636   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:17 GMT
	I1024 19:25:17.107645   29716 round_trippers.go:580]     Audit-Id: 29ceac85-7b14-423a-a4b0-3730647a0cff
	I1024 19:25:17.107656   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:17.108546   29716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"426","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53917 chars]
	I1024 19:25:17.113214   29716 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:17.113313   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:25:17.113325   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:17.113335   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:17.113346   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:17.117760   29716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:25:17.117775   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:17.117782   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:17.117787   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:17.117792   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:17.117797   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:17 GMT
	I1024 19:25:17.117802   29716 round_trippers.go:580]     Audit-Id: 1312ff83-ab84-4073-80ff-056b26198181
	I1024 19:25:17.117809   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:17.117961   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"426","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1024 19:25:17.118450   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:17.118471   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:17.118482   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:17.118490   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:17.124874   29716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1024 19:25:17.124890   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:17.124896   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:17.124902   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:17.124907   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:17.124912   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:17 GMT
	I1024 19:25:17.124916   29716 round_trippers.go:580]     Audit-Id: 7c0583ff-8506-49d3-94be-facc86e1a45d
	I1024 19:25:17.124931   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:17.125306   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:17.125631   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:25:17.125643   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:17.125650   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:17.125656   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:17.129190   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:17.129209   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:17.129218   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:17.129226   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:17.129236   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:17.129245   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:17 GMT
	I1024 19:25:17.129256   29716 round_trippers.go:580]     Audit-Id: 5ef758ef-e04f-4c5a-b1ae-ff6a11d47ec3
	I1024 19:25:17.129266   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:17.129649   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"426","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1024 19:25:17.130121   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:17.130141   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:17.130152   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:17.130161   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:17.132767   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:17.132780   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:17.132786   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:17.132793   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:17 GMT
	I1024 19:25:17.132803   29716 round_trippers.go:580]     Audit-Id: 6ea13437-0dbc-4ced-8c24-b7f0add8909b
	I1024 19:25:17.132811   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:17.132822   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:17.132834   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:17.134027   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:17.634883   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:25:17.634906   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:17.634915   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:17.634920   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:17.638305   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:17.638329   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:17.638340   29716 round_trippers.go:580]     Audit-Id: fbbbc737-82e1-40dd-aef8-f4b2b3d79a7b
	I1024 19:25:17.638350   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:17.638364   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:17.638373   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:17.638384   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:17.638393   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:17 GMT
	I1024 19:25:17.638565   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"426","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1024 19:25:17.639122   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:17.639141   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:17.639152   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:17.639162   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:17.643254   29716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:25:17.643270   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:17.643280   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:17.643288   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:17.643299   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:17 GMT
	I1024 19:25:17.643312   29716 round_trippers.go:580]     Audit-Id: a9f26003-baf2-4058-8a61-67699dc303ea
	I1024 19:25:17.643322   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:17.643339   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:17.643591   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:18.135271   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:25:18.135291   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:18.135299   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:18.135306   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:18.137847   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:18.137872   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:18.137882   29716 round_trippers.go:580]     Audit-Id: d46201b5-b89c-4bf9-9008-cdb1ea9b17b1
	I1024 19:25:18.137894   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:18.137905   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:18.137916   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:18.137925   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:18.137935   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:18 GMT
	I1024 19:25:18.138306   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"426","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1024 19:25:18.138759   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:18.138772   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:18.138780   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:18.138786   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:18.140609   29716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:25:18.140621   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:18.140627   29716 round_trippers.go:580]     Audit-Id: b23bac08-cad7-4481-b1d9-cf9e01b84ef8
	I1024 19:25:18.140635   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:18.140641   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:18.140646   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:18.140651   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:18.140657   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:18 GMT
	I1024 19:25:18.141051   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:18.634667   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:25:18.634702   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:18.634710   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:18.634716   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:18.637275   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:18.637311   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:18.637322   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:18.637334   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:18 GMT
	I1024 19:25:18.637339   29716 round_trippers.go:580]     Audit-Id: 9fe011d2-7ea6-4f7c-a112-274859c80038
	I1024 19:25:18.637345   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:18.637350   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:18.637356   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:18.637779   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"436","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1024 19:25:18.638348   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:18.638363   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:18.638371   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:18.638377   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:18.640769   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:18.640782   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:18.640787   29716 round_trippers.go:580]     Audit-Id: a34a5886-8c99-4c05-94b7-29c50c6d03c7
	I1024 19:25:18.640793   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:18.640800   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:18.640808   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:18.640817   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:18.640828   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:18 GMT
	I1024 19:25:18.641200   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:18.641584   29716 pod_ready.go:92] pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:18.641603   29716 pod_ready.go:81] duration metric: took 1.528361752s waiting for pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:18.641617   29716 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:18.641675   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-632589
	I1024 19:25:18.641686   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:18.641696   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:18.641709   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:18.643775   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:18.643797   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:18.643807   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:18.643819   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:18.643836   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:18 GMT
	I1024 19:25:18.643845   29716 round_trippers.go:580]     Audit-Id: 0f199bbc-55cc-489a-9e8a-6e81a9313b32
	I1024 19:25:18.643854   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:18.643863   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:18.644270   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-632589","namespace":"kube-system","uid":"a84a9833-e3b8-4148-9ee7-3f4479a10186","resourceVersion":"290","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.247:2379","kubernetes.io/config.hash":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.mirror":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.seen":"2023-10-24T19:24:56.213299221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1024 19:25:18.644662   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:18.644678   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:18.644689   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:18.644698   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:18.646585   29716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:25:18.646605   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:18.646615   29716 round_trippers.go:580]     Audit-Id: bd1bf818-3780-4287-8bb6-abaed2b23c6b
	I1024 19:25:18.646624   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:18.646637   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:18.646645   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:18.646657   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:18.646664   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:18 GMT
	I1024 19:25:18.646820   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:18.647168   29716 pod_ready.go:92] pod "etcd-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:18.647184   29716 pod_ready.go:81] duration metric: took 5.556055ms waiting for pod "etcd-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:18.647198   29716 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:18.647254   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-632589
	I1024 19:25:18.647270   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:18.647281   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:18.647292   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:18.649437   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:18.649453   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:18.649462   29716 round_trippers.go:580]     Audit-Id: 470ef0c6-75f6-471a-b45c-2d6ece41c235
	I1024 19:25:18.649473   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:18.649478   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:18.649486   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:18.649491   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:18.649499   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:18 GMT
	I1024 19:25:18.649649   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-632589","namespace":"kube-system","uid":"34fcbf72-bf92-477f-8c1c-b0fd908c561d","resourceVersion":"292","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.247:8443","kubernetes.io/config.hash":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.mirror":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.seen":"2023-10-24T19:24:56.213304140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1024 19:25:18.650086   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:18.650106   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:18.650114   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:18.650121   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:18.651761   29716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:25:18.651775   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:18.651782   29716 round_trippers.go:580]     Audit-Id: 28474f30-f048-4051-88ed-3ef0c058fb45
	I1024 19:25:18.651787   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:18.651793   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:18.651805   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:18.651814   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:18.651824   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:18 GMT
	I1024 19:25:18.652142   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:18.652460   29716 pod_ready.go:92] pod "kube-apiserver-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:18.652476   29716 pod_ready.go:81] duration metric: took 5.26709ms waiting for pod "kube-apiserver-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:18.652484   29716 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:18.652527   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-632589
	I1024 19:25:18.652534   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:18.652540   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:18.652546   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:18.654380   29716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:25:18.654397   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:18.654407   29716 round_trippers.go:580]     Audit-Id: abd6be2b-c64b-4e78-b19b-1d459f4bf000
	I1024 19:25:18.654415   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:18.654425   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:18.654434   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:18.654443   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:18.654449   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:18 GMT
	I1024 19:25:18.654587   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-632589","namespace":"kube-system","uid":"6eb03208-9b7f-4b5d-a7cf-03dd9c7948e6","resourceVersion":"297","creationTimestamp":"2023-10-24T19:24:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9a4a5ca64f08e8d78cd58402e3f15810","kubernetes.io/config.mirror":"9a4a5ca64f08e8d78cd58402e3f15810","kubernetes.io/config.seen":"2023-10-24T19:24:47.530352200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1024 19:25:18.698228   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:18.698266   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:18.698279   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:18.698289   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:18.701060   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:18.701074   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:18.701082   29716 round_trippers.go:580]     Audit-Id: 5d6442ce-a231-4a26-928f-faeb579be5b1
	I1024 19:25:18.701090   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:18.701098   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:18.701106   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:18.701115   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:18.701126   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:18 GMT
	I1024 19:25:18.701392   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:18.701665   29716 pod_ready.go:92] pod "kube-controller-manager-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:18.701678   29716 pod_ready.go:81] duration metric: took 49.184691ms waiting for pod "kube-controller-manager-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:18.701689   29716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gd49s" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:18.898094   29716 request.go:629] Waited for 196.344127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd49s
	I1024 19:25:18.898141   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd49s
	I1024 19:25:18.898147   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:18.898154   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:18.898168   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:18.901077   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:18.901093   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:18.901099   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:18.901104   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:18.901109   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:18 GMT
	I1024 19:25:18.901115   29716 round_trippers.go:580]     Audit-Id: d42e420a-b078-4e75-99cb-d1e471758c9d
	I1024 19:25:18.901119   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:18.901124   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:18.901532   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gd49s","generateName":"kube-proxy-","namespace":"kube-system","uid":"a1c573fd-3f4b-4d90-a366-6d859a121185","resourceVersion":"408","creationTimestamp":"2023-10-24T19:25:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1024 19:25:19.098293   29716 request.go:629] Waited for 196.382734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:19.098360   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:19.098365   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:19.098372   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:19.098378   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:19.100928   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:19.100949   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:19.100957   29716 round_trippers.go:580]     Audit-Id: 70f6e9a5-d182-44ea-b950-343a1c0ae1c0
	I1024 19:25:19.100965   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:19.100974   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:19.100982   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:19.100992   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:19.101007   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:19 GMT
	I1024 19:25:19.101327   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:19.101615   29716 pod_ready.go:92] pod "kube-proxy-gd49s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:19.101628   29716 pod_ready.go:81] duration metric: took 399.933241ms waiting for pod "kube-proxy-gd49s" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:19.101637   29716 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:19.298062   29716 request.go:629] Waited for 196.353785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-632589
	I1024 19:25:19.298121   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-632589
	I1024 19:25:19.298126   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:19.298133   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:19.298140   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:19.300552   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:19.300570   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:19.300579   29716 round_trippers.go:580]     Audit-Id: 418a4b0a-8c4d-413e-aa14-8ae0378cb048
	I1024 19:25:19.300587   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:19.300595   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:19.300603   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:19.300611   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:19.300619   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:19 GMT
	I1024 19:25:19.300835   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-632589","namespace":"kube-system","uid":"e85a7c19-1a25-42f5-81bd-16ed7070ca3c","resourceVersion":"294","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"83154ed970e6208e036ff8de26a58e6d","kubernetes.io/config.mirror":"83154ed970e6208e036ff8de26a58e6d","kubernetes.io/config.seen":"2023-10-24T19:24:56.213306721Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1024 19:25:19.497495   29716 request.go:629] Waited for 196.312017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:19.497593   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:19.497602   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:19.497609   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:19.497615   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:19.500411   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:19.500429   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:19.500435   29716 round_trippers.go:580]     Audit-Id: 1fd207dc-4d0a-49ae-8265-77f9735d453a
	I1024 19:25:19.500441   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:19.500446   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:19.500451   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:19.500456   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:19.500464   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:19 GMT
	I1024 19:25:19.500819   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:19.501104   29716 pod_ready.go:92] pod "kube-scheduler-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:19.501116   29716 pod_ready.go:81] duration metric: took 399.474309ms waiting for pod "kube-scheduler-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:19.501126   29716 pod_ready.go:38] duration metric: took 2.40013018s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
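For reference, the readiness wait logged above is essentially a polling loop that re-reads each pod until its Ready condition is True. The following is a minimal, illustrative client-go sketch of that kind of check; it is not minikube's own pod_ready.go, and it assumes a kubeconfig at the default location plus the pod and namespace names seen in this run:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll roughly every 500ms, matching the cadence of the GET requests above.
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
                "coredns-5dd5756b68-c5l8s", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            if isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

The per-pod "duration metric" lines in the log are wall-clock timings taken around this style of polling loop.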
	I1024 19:25:19.501141   29716 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:25:19.501185   29716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:25:19.513566   29716 command_runner.go:130] > 1083
	I1024 19:25:19.513664   29716 api_server.go:72] duration metric: took 9.659851356s to wait for apiserver process to appear ...
	I1024 19:25:19.513678   29716 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:25:19.513692   29716 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I1024 19:25:19.518209   29716 api_server.go:279] https://192.168.39.247:8443/healthz returned 200:
	ok
	I1024 19:25:19.518272   29716 round_trippers.go:463] GET https://192.168.39.247:8443/version
	I1024 19:25:19.518282   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:19.518294   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:19.518307   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:19.519567   29716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:25:19.519587   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:19.519597   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:19.519605   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:19.519613   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:19.519624   29716 round_trippers.go:580]     Content-Length: 264
	I1024 19:25:19.519632   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:19 GMT
	I1024 19:25:19.519647   29716 round_trippers.go:580]     Audit-Id: a519337b-d0c4-4962-99a1-ecb092eb258e
	I1024 19:25:19.519663   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:19.519683   29716 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1024 19:25:19.519762   29716 api_server.go:141] control plane version: v1.28.3
	I1024 19:25:19.519777   29716 api_server.go:131] duration metric: took 6.093084ms to wait for apiserver health ...
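The health and version checks above hit the raw /healthz endpoint (expecting the literal body "ok") and then GET /version. A hedged sketch of the same two probes, reusing the client and imports from the earlier example rather than minikube's api_server.go, could look like:

    // checkAPIServer probes /healthz and reads the server version,
    // mirroring the requests in the log above. Sketch only.
    func checkAPIServer(ctx context.Context, client kubernetes.Interface) error {
        raw, err := client.CoreV1().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        ver, err := client.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        // Expect "ok" and a version string such as "v1.28.3".
        fmt.Printf("healthz=%s version=%s\n", string(raw), ver.GitVersion)
        return nil
    }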
	I1024 19:25:19.519784   29716 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:25:19.698219   29716 request.go:629] Waited for 178.366573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:25:19.698293   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:25:19.698299   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:19.698306   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:19.698312   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:19.702191   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:19.702216   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:19.702227   29716 round_trippers.go:580]     Audit-Id: 3be32c7d-cb83-48e1-b3b4-2bc3f806af80
	I1024 19:25:19.702235   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:19.702243   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:19.702250   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:19.702257   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:19.702264   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:19 GMT
	I1024 19:25:19.702941   29716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"436","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53995 chars]
	I1024 19:25:19.704579   29716 system_pods.go:59] 8 kube-system pods found
	I1024 19:25:19.704601   29716 system_pods.go:61] "coredns-5dd5756b68-c5l8s" [20aa782d-e6ed-45ad-b625-556d1a8503c0] Running
	I1024 19:25:19.704606   29716 system_pods.go:61] "etcd-multinode-632589" [a84a9833-e3b8-4148-9ee7-3f4479a10186] Running
	I1024 19:25:19.704610   29716 system_pods.go:61] "kindnet-xh444" [dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b] Running
	I1024 19:25:19.704614   29716 system_pods.go:61] "kube-apiserver-multinode-632589" [34fcbf72-bf92-477f-8c1c-b0fd908c561d] Running
	I1024 19:25:19.704618   29716 system_pods.go:61] "kube-controller-manager-multinode-632589" [6eb03208-9b7f-4b5d-a7cf-03dd9c7948e6] Running
	I1024 19:25:19.704622   29716 system_pods.go:61] "kube-proxy-gd49s" [a1c573fd-3f4b-4d90-a366-6d859a121185] Running
	I1024 19:25:19.704625   29716 system_pods.go:61] "kube-scheduler-multinode-632589" [e85a7c19-1a25-42f5-81bd-16ed7070ca3c] Running
	I1024 19:25:19.704629   29716 system_pods.go:61] "storage-provisioner" [4023756b-6e38-476d-8dec-90ea2346dc01] Running
	I1024 19:25:19.704635   29716 system_pods.go:74] duration metric: took 184.846143ms to wait for pod list to return data ...
	I1024 19:25:19.704644   29716 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:25:19.897784   29716 request.go:629] Waited for 193.08227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/default/serviceaccounts
	I1024 19:25:19.897843   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/default/serviceaccounts
	I1024 19:25:19.897850   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:19.897859   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:19.897870   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:19.900583   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:19.900605   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:19.900614   29716 round_trippers.go:580]     Audit-Id: 4945d406-fa98-412e-b9ab-285a1f4b81eb
	I1024 19:25:19.900622   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:19.900630   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:19.900638   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:19.900648   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:19.900659   29716 round_trippers.go:580]     Content-Length: 261
	I1024 19:25:19.900667   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:19 GMT
	I1024 19:25:19.900701   29716 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"441"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"44688757-fcd3-49d1-a7b3-5cd59b15336d","resourceVersion":"346","creationTimestamp":"2023-10-24T19:25:09Z"}}]}
	I1024 19:25:19.900892   29716 default_sa.go:45] found service account: "default"
	I1024 19:25:19.900910   29716 default_sa.go:55] duration metric: took 196.260361ms for default service account to be created ...
	I1024 19:25:19.900917   29716 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:25:20.098339   29716 request.go:629] Waited for 197.365573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:25:20.098403   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:25:20.098411   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:20.098421   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:20.098431   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:20.102541   29716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:25:20.102563   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:20.102573   29716 round_trippers.go:580]     Audit-Id: f76e4be9-0fc1-4394-969d-c13114c6d0a8
	I1024 19:25:20.102582   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:20.102590   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:20.102599   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:20.102607   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:20.102615   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:20 GMT
	I1024 19:25:20.103413   29716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"441"},"items":[{"metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"436","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53995 chars]
	I1024 19:25:20.105036   29716 system_pods.go:86] 8 kube-system pods found
	I1024 19:25:20.105070   29716 system_pods.go:89] "coredns-5dd5756b68-c5l8s" [20aa782d-e6ed-45ad-b625-556d1a8503c0] Running
	I1024 19:25:20.105078   29716 system_pods.go:89] "etcd-multinode-632589" [a84a9833-e3b8-4148-9ee7-3f4479a10186] Running
	I1024 19:25:20.105084   29716 system_pods.go:89] "kindnet-xh444" [dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b] Running
	I1024 19:25:20.105091   29716 system_pods.go:89] "kube-apiserver-multinode-632589" [34fcbf72-bf92-477f-8c1c-b0fd908c561d] Running
	I1024 19:25:20.105099   29716 system_pods.go:89] "kube-controller-manager-multinode-632589" [6eb03208-9b7f-4b5d-a7cf-03dd9c7948e6] Running
	I1024 19:25:20.105106   29716 system_pods.go:89] "kube-proxy-gd49s" [a1c573fd-3f4b-4d90-a366-6d859a121185] Running
	I1024 19:25:20.105115   29716 system_pods.go:89] "kube-scheduler-multinode-632589" [e85a7c19-1a25-42f5-81bd-16ed7070ca3c] Running
	I1024 19:25:20.105122   29716 system_pods.go:89] "storage-provisioner" [4023756b-6e38-476d-8dec-90ea2346dc01] Running
	I1024 19:25:20.105132   29716 system_pods.go:126] duration metric: took 204.20864ms to wait for k8s-apps to be running ...
	I1024 19:25:20.105145   29716 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:25:20.105190   29716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:25:20.117693   29716 system_svc.go:56] duration metric: took 12.541393ms WaitForService to wait for kubelet.
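The kubelet check above runs systemctl over SSH and treats a zero exit code as "service is running". A local-only sketch of the same idea, using os/exec instead of minikube's ssh_runner (illustrative, not the tool's actual code), is:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 only when the unit is active.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            log.Fatalf("kubelet service is not active: %v", err)
        }
        fmt.Println("kubelet service is active")
    }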
	I1024 19:25:20.117714   29716 kubeadm.go:581] duration metric: took 10.26390421s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:25:20.117734   29716 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:25:20.298187   29716 request.go:629] Waited for 180.360676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes
	I1024 19:25:20.298250   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes
	I1024 19:25:20.298257   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:20.298268   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:20.298284   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:20.302387   29716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:25:20.302409   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:20.302417   29716 round_trippers.go:580]     Audit-Id: 075d6d9f-bb8b-46ab-84da-63f86f4d4039
	I1024 19:25:20.302423   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:20.302428   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:20.302433   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:20.302438   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:20.302451   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:20 GMT
	I1024 19:25:20.303276   29716 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"441"},"items":[{"metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I1024 19:25:20.303599   29716 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:25:20.303616   29716 node_conditions.go:123] node cpu capacity is 2
	I1024 19:25:20.303628   29716 node_conditions.go:105] duration metric: took 185.88874ms to run NodePressure ...
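The NodePressure step above reads the node objects and reports their capacities (ephemeral storage and CPU in this run). A small sketch of an equivalent listing, again reusing the client from the first example and standard client-go calls (illustrative only), might be:

    // printNodeCapacity lists nodes and prints the capacities reported in the
    // log above. Sketch only; assumes the client from the first example.
    func printNodeCapacity(ctx context.Context, client kubernetes.Interface) error {
        nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
        return nil
    }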
	I1024 19:25:20.303650   29716 start.go:228] waiting for startup goroutines ...
	I1024 19:25:20.303659   29716 start.go:233] waiting for cluster config update ...
	I1024 19:25:20.303667   29716 start.go:242] writing updated cluster config ...
	I1024 19:25:20.305937   29716 out.go:177] 
	I1024 19:25:20.308679   29716 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:25:20.308740   29716 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/config.json ...
	I1024 19:25:20.310509   29716 out.go:177] * Starting worker node multinode-632589-m02 in cluster multinode-632589
	I1024 19:25:20.311801   29716 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:25:20.311817   29716 cache.go:57] Caching tarball of preloaded images
	I1024 19:25:20.311907   29716 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 19:25:20.311919   29716 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:25:20.311977   29716 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/config.json ...
	I1024 19:25:20.312114   29716 start.go:365] acquiring machines lock for multinode-632589-m02: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:25:20.312150   29716 start.go:369] acquired machines lock for "multinode-632589-m02" in 19.522µs
	I1024 19:25:20.312166   29716 start.go:93] Provisioning new machine with config: &{Name:multinode-632589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1024 19:25:20.312226   29716 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1024 19:25:20.313852   29716 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1024 19:25:20.313913   29716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:25:20.313936   29716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:25:20.327673   29716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38683
	I1024 19:25:20.328020   29716 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:25:20.328446   29716 main.go:141] libmachine: Using API Version  1
	I1024 19:25:20.328465   29716 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:25:20.328756   29716 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:25:20.328926   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetMachineName
	I1024 19:25:20.329106   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:25:20.329275   29716 start.go:159] libmachine.API.Create for "multinode-632589" (driver="kvm2")
	I1024 19:25:20.329318   29716 client.go:168] LocalClient.Create starting
	I1024 19:25:20.329353   29716 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem
	I1024 19:25:20.329384   29716 main.go:141] libmachine: Decoding PEM data...
	I1024 19:25:20.329406   29716 main.go:141] libmachine: Parsing certificate...
	I1024 19:25:20.329470   29716 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem
	I1024 19:25:20.329504   29716 main.go:141] libmachine: Decoding PEM data...
	I1024 19:25:20.329525   29716 main.go:141] libmachine: Parsing certificate...
	I1024 19:25:20.329551   29716 main.go:141] libmachine: Running pre-create checks...
	I1024 19:25:20.329564   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .PreCreateCheck
	I1024 19:25:20.329756   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetConfigRaw
	I1024 19:25:20.330118   29716 main.go:141] libmachine: Creating machine...
	I1024 19:25:20.330132   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .Create
	I1024 19:25:20.330242   29716 main.go:141] libmachine: (multinode-632589-m02) Creating KVM machine...
	I1024 19:25:20.331352   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found existing default KVM network
	I1024 19:25:20.331493   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found existing private KVM network mk-multinode-632589
	I1024 19:25:20.331616   29716 main.go:141] libmachine: (multinode-632589-m02) Setting up store path in /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02 ...
	I1024 19:25:20.331642   29716 main.go:141] libmachine: (multinode-632589-m02) Building disk image from file:///home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso
	I1024 19:25:20.331714   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:20.331620   30116 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:25:20.331794   29716 main.go:141] libmachine: (multinode-632589-m02) Downloading /home/jenkins/minikube-integration/17485-9023/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso...
	I1024 19:25:20.529360   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:20.529229   30116 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/id_rsa...
	I1024 19:25:20.760504   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:20.760379   30116 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/multinode-632589-m02.rawdisk...
	I1024 19:25:20.760529   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Writing magic tar header
	I1024 19:25:20.760541   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Writing SSH key tar header
	I1024 19:25:20.760550   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:20.760502   30116 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02 ...
	I1024 19:25:20.760632   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02
	I1024 19:25:20.760650   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube/machines
	I1024 19:25:20.760660   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:25:20.760671   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023
	I1024 19:25:20.760685   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1024 19:25:20.760700   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Checking permissions on dir: /home/jenkins
	I1024 19:25:20.760715   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Checking permissions on dir: /home
	I1024 19:25:20.760721   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Skipping /home - not owner
	I1024 19:25:20.760736   29716 main.go:141] libmachine: (multinode-632589-m02) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02 (perms=drwx------)
	I1024 19:25:20.760746   29716 main.go:141] libmachine: (multinode-632589-m02) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube/machines (perms=drwxr-xr-x)
	I1024 19:25:20.760758   29716 main.go:141] libmachine: (multinode-632589-m02) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube (perms=drwxr-xr-x)
	I1024 19:25:20.760774   29716 main.go:141] libmachine: (multinode-632589-m02) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023 (perms=drwxrwxr-x)
	I1024 19:25:20.760792   29716 main.go:141] libmachine: (multinode-632589-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1024 19:25:20.760807   29716 main.go:141] libmachine: (multinode-632589-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1024 19:25:20.760822   29716 main.go:141] libmachine: (multinode-632589-m02) Creating domain...
	I1024 19:25:20.761659   29716 main.go:141] libmachine: (multinode-632589-m02) define libvirt domain using xml: 
	I1024 19:25:20.761679   29716 main.go:141] libmachine: (multinode-632589-m02) <domain type='kvm'>
	I1024 19:25:20.761690   29716 main.go:141] libmachine: (multinode-632589-m02)   <name>multinode-632589-m02</name>
	I1024 19:25:20.761699   29716 main.go:141] libmachine: (multinode-632589-m02)   <memory unit='MiB'>2200</memory>
	I1024 19:25:20.761713   29716 main.go:141] libmachine: (multinode-632589-m02)   <vcpu>2</vcpu>
	I1024 19:25:20.761725   29716 main.go:141] libmachine: (multinode-632589-m02)   <features>
	I1024 19:25:20.761734   29716 main.go:141] libmachine: (multinode-632589-m02)     <acpi/>
	I1024 19:25:20.761739   29716 main.go:141] libmachine: (multinode-632589-m02)     <apic/>
	I1024 19:25:20.761746   29716 main.go:141] libmachine: (multinode-632589-m02)     <pae/>
	I1024 19:25:20.761753   29716 main.go:141] libmachine: (multinode-632589-m02)     
	I1024 19:25:20.761759   29716 main.go:141] libmachine: (multinode-632589-m02)   </features>
	I1024 19:25:20.761767   29716 main.go:141] libmachine: (multinode-632589-m02)   <cpu mode='host-passthrough'>
	I1024 19:25:20.761789   29716 main.go:141] libmachine: (multinode-632589-m02)   
	I1024 19:25:20.761812   29716 main.go:141] libmachine: (multinode-632589-m02)   </cpu>
	I1024 19:25:20.761829   29716 main.go:141] libmachine: (multinode-632589-m02)   <os>
	I1024 19:25:20.761842   29716 main.go:141] libmachine: (multinode-632589-m02)     <type>hvm</type>
	I1024 19:25:20.761854   29716 main.go:141] libmachine: (multinode-632589-m02)     <boot dev='cdrom'/>
	I1024 19:25:20.761863   29716 main.go:141] libmachine: (multinode-632589-m02)     <boot dev='hd'/>
	I1024 19:25:20.761891   29716 main.go:141] libmachine: (multinode-632589-m02)     <bootmenu enable='no'/>
	I1024 19:25:20.761908   29716 main.go:141] libmachine: (multinode-632589-m02)   </os>
	I1024 19:25:20.761934   29716 main.go:141] libmachine: (multinode-632589-m02)   <devices>
	I1024 19:25:20.761952   29716 main.go:141] libmachine: (multinode-632589-m02)     <disk type='file' device='cdrom'>
	I1024 19:25:20.761966   29716 main.go:141] libmachine: (multinode-632589-m02)       <source file='/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/boot2docker.iso'/>
	I1024 19:25:20.761973   29716 main.go:141] libmachine: (multinode-632589-m02)       <target dev='hdc' bus='scsi'/>
	I1024 19:25:20.761979   29716 main.go:141] libmachine: (multinode-632589-m02)       <readonly/>
	I1024 19:25:20.761987   29716 main.go:141] libmachine: (multinode-632589-m02)     </disk>
	I1024 19:25:20.761998   29716 main.go:141] libmachine: (multinode-632589-m02)     <disk type='file' device='disk'>
	I1024 19:25:20.762011   29716 main.go:141] libmachine: (multinode-632589-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1024 19:25:20.762027   29716 main.go:141] libmachine: (multinode-632589-m02)       <source file='/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/multinode-632589-m02.rawdisk'/>
	I1024 19:25:20.762034   29716 main.go:141] libmachine: (multinode-632589-m02)       <target dev='hda' bus='virtio'/>
	I1024 19:25:20.762043   29716 main.go:141] libmachine: (multinode-632589-m02)     </disk>
	I1024 19:25:20.762049   29716 main.go:141] libmachine: (multinode-632589-m02)     <interface type='network'>
	I1024 19:25:20.762058   29716 main.go:141] libmachine: (multinode-632589-m02)       <source network='mk-multinode-632589'/>
	I1024 19:25:20.762064   29716 main.go:141] libmachine: (multinode-632589-m02)       <model type='virtio'/>
	I1024 19:25:20.762071   29716 main.go:141] libmachine: (multinode-632589-m02)     </interface>
	I1024 19:25:20.762082   29716 main.go:141] libmachine: (multinode-632589-m02)     <interface type='network'>
	I1024 19:25:20.762097   29716 main.go:141] libmachine: (multinode-632589-m02)       <source network='default'/>
	I1024 19:25:20.762110   29716 main.go:141] libmachine: (multinode-632589-m02)       <model type='virtio'/>
	I1024 19:25:20.762121   29716 main.go:141] libmachine: (multinode-632589-m02)     </interface>
	I1024 19:25:20.762129   29716 main.go:141] libmachine: (multinode-632589-m02)     <serial type='pty'>
	I1024 19:25:20.762136   29716 main.go:141] libmachine: (multinode-632589-m02)       <target port='0'/>
	I1024 19:25:20.762143   29716 main.go:141] libmachine: (multinode-632589-m02)     </serial>
	I1024 19:25:20.762149   29716 main.go:141] libmachine: (multinode-632589-m02)     <console type='pty'>
	I1024 19:25:20.762157   29716 main.go:141] libmachine: (multinode-632589-m02)       <target type='serial' port='0'/>
	I1024 19:25:20.762169   29716 main.go:141] libmachine: (multinode-632589-m02)     </console>
	I1024 19:25:20.762182   29716 main.go:141] libmachine: (multinode-632589-m02)     <rng model='virtio'>
	I1024 19:25:20.762197   29716 main.go:141] libmachine: (multinode-632589-m02)       <backend model='random'>/dev/random</backend>
	I1024 19:25:20.762210   29716 main.go:141] libmachine: (multinode-632589-m02)     </rng>
	I1024 19:25:20.762221   29716 main.go:141] libmachine: (multinode-632589-m02)     
	I1024 19:25:20.762230   29716 main.go:141] libmachine: (multinode-632589-m02)     
	I1024 19:25:20.762236   29716 main.go:141] libmachine: (multinode-632589-m02)   </devices>
	I1024 19:25:20.762244   29716 main.go:141] libmachine: (multinode-632589-m02) </domain>
	I1024 19:25:20.762255   29716 main.go:141] libmachine: (multinode-632589-m02) 
	I1024 19:25:20.768983   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:8d:3f:8e in network default
	I1024 19:25:20.769570   29716 main.go:141] libmachine: (multinode-632589-m02) Ensuring networks are active...
	I1024 19:25:20.769592   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:20.770237   29716 main.go:141] libmachine: (multinode-632589-m02) Ensuring network default is active
	I1024 19:25:20.770683   29716 main.go:141] libmachine: (multinode-632589-m02) Ensuring network mk-multinode-632589 is active
	I1024 19:25:20.771002   29716 main.go:141] libmachine: (multinode-632589-m02) Getting domain xml...
	I1024 19:25:20.771585   29716 main.go:141] libmachine: (multinode-632589-m02) Creating domain...
	I1024 19:25:22.014506   29716 main.go:141] libmachine: (multinode-632589-m02) Waiting to get IP...
	I1024 19:25:22.015245   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:22.015645   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:22.015679   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:22.015627   30116 retry.go:31] will retry after 195.056222ms: waiting for machine to come up
	I1024 19:25:22.212042   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:22.212480   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:22.212507   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:22.212449   30116 retry.go:31] will retry after 290.28977ms: waiting for machine to come up
	I1024 19:25:22.503796   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:22.504188   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:22.504215   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:22.504172   30116 retry.go:31] will retry after 302.794069ms: waiting for machine to come up
	I1024 19:25:22.808693   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:22.809232   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:22.809261   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:22.809181   30116 retry.go:31] will retry after 566.987643ms: waiting for machine to come up
	I1024 19:25:23.377902   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:23.378321   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:23.378341   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:23.378276   30116 retry.go:31] will retry after 676.322151ms: waiting for machine to come up
	I1024 19:25:24.056075   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:24.056520   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:24.056553   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:24.056473   30116 retry.go:31] will retry after 641.59911ms: waiting for machine to come up
	I1024 19:25:24.699109   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:24.699608   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:24.699636   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:24.699505   30116 retry.go:31] will retry after 1.022778385s: waiting for machine to come up
	I1024 19:25:25.723970   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:25.724390   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:25.724426   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:25.724347   30116 retry.go:31] will retry after 1.073451104s: waiting for machine to come up
	I1024 19:25:26.799433   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:26.799862   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:26.799882   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:26.799819   30116 retry.go:31] will retry after 1.557195535s: waiting for machine to come up
	I1024 19:25:28.359581   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:28.359995   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:28.360021   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:28.359947   30116 retry.go:31] will retry after 2.166949109s: waiting for machine to come up
	I1024 19:25:30.528100   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:30.528615   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:30.528642   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:30.528566   30116 retry.go:31] will retry after 2.853808345s: waiting for machine to come up
	I1024 19:25:33.383715   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:33.384142   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:33.384163   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:33.384098   30116 retry.go:31] will retry after 3.079026698s: waiting for machine to come up
	I1024 19:25:36.464626   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:36.465020   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:36.465048   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:36.464958   30116 retry.go:31] will retry after 3.63187322s: waiting for machine to come up
	I1024 19:25:40.099907   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:40.100332   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find current IP address of domain multinode-632589-m02 in network mk-multinode-632589
	I1024 19:25:40.100357   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | I1024 19:25:40.100292   30116 retry.go:31] will retry after 3.431362894s: waiting for machine to come up
	I1024 19:25:43.535077   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:43.535415   29716 main.go:141] libmachine: (multinode-632589-m02) Found IP for machine: 192.168.39.186
	I1024 19:25:43.535447   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has current primary IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:43.535459   29716 main.go:141] libmachine: (multinode-632589-m02) Reserving static IP address...
	I1024 19:25:43.535898   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | unable to find host DHCP lease matching {name: "multinode-632589-m02", mac: "52:54:00:ae:ed:9b", ip: "192.168.39.186"} in network mk-multinode-632589
	I1024 19:25:43.609237   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Getting to WaitForSSH function...
	I1024 19:25:43.609268   29716 main.go:141] libmachine: (multinode-632589-m02) Reserved static IP address: 192.168.39.186
	I1024 19:25:43.609283   29716 main.go:141] libmachine: (multinode-632589-m02) Waiting for SSH to be available...
	I1024 19:25:43.611658   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:43.611981   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:43.612015   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:43.612143   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Using SSH client type: external
	I1024 19:25:43.612174   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/id_rsa (-rw-------)
	I1024 19:25:43.612208   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 19:25:43.612230   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | About to run SSH command:
	I1024 19:25:43.612246   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | exit 0
	I1024 19:25:43.709072   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | SSH cmd err, output: <nil>: 
	I1024 19:25:43.709340   29716 main.go:141] libmachine: (multinode-632589-m02) KVM machine creation complete!
	I1024 19:25:43.709651   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetConfigRaw
	I1024 19:25:43.710068   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:25:43.710240   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:25:43.710367   29716 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1024 19:25:43.710380   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetState
	I1024 19:25:43.711556   29716 main.go:141] libmachine: Detecting operating system of created instance...
	I1024 19:25:43.711573   29716 main.go:141] libmachine: Waiting for SSH to be available...
	I1024 19:25:43.711579   29716 main.go:141] libmachine: Getting to WaitForSSH function...
	I1024 19:25:43.711585   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:25:43.713781   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:43.714116   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:43.714133   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:43.714255   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:25:43.714420   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:43.714557   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:43.714722   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:25:43.714863   29716 main.go:141] libmachine: Using SSH client type: native
	I1024 19:25:43.715259   29716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1024 19:25:43.715273   29716 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1024 19:25:43.840247   29716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:25:43.840265   29716 main.go:141] libmachine: Detecting the provisioner...
	I1024 19:25:43.840273   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:25:43.843093   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:43.843431   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:43.843455   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:43.843648   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:25:43.843848   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:43.843993   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:43.844128   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:25:43.844281   29716 main.go:141] libmachine: Using SSH client type: native
	I1024 19:25:43.844605   29716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1024 19:25:43.844618   29716 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1024 19:25:43.974179   29716 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g71212f5-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1024 19:25:43.974235   29716 main.go:141] libmachine: found compatible host: buildroot
	I1024 19:25:43.974242   29716 main.go:141] libmachine: Provisioning with buildroot...
	I1024 19:25:43.974258   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetMachineName
	I1024 19:25:43.974530   29716 buildroot.go:166] provisioning hostname "multinode-632589-m02"
	I1024 19:25:43.974551   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetMachineName
	I1024 19:25:43.974728   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:25:43.977234   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:43.977680   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:43.977715   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:43.977907   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:25:43.978132   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:43.978264   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:43.978403   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:25:43.978541   29716 main.go:141] libmachine: Using SSH client type: native
	I1024 19:25:43.978845   29716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1024 19:25:43.978859   29716 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-632589-m02 && echo "multinode-632589-m02" | sudo tee /etc/hostname
	I1024 19:25:44.118187   29716 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-632589-m02
	
	I1024 19:25:44.118210   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:25:44.120970   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:44.121385   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:44.121415   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:44.121619   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:25:44.121813   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:44.121958   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:44.122092   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:25:44.122242   29716 main.go:141] libmachine: Using SSH client type: native
	I1024 19:25:44.122557   29716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1024 19:25:44.122575   29716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-632589-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-632589-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-632589-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:25:44.260568   29716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:25:44.260594   29716 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 19:25:44.260624   29716 buildroot.go:174] setting up certificates
	I1024 19:25:44.260634   29716 provision.go:83] configureAuth start
	I1024 19:25:44.260649   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetMachineName
	I1024 19:25:44.260907   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetIP
	I1024 19:25:44.263465   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:44.263813   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:44.263848   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:44.264012   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:25:44.266191   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:44.266526   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:44.266561   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:44.266675   29716 provision.go:138] copyHostCerts
	I1024 19:25:44.266697   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:25:44.266722   29716 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 19:25:44.266739   29716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:25:44.266798   29716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 19:25:44.266876   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:25:44.266892   29716 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 19:25:44.266899   29716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:25:44.266923   29716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 19:25:44.266979   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:25:44.266995   29716 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 19:25:44.267003   29716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:25:44.267023   29716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 19:25:44.267074   29716 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.multinode-632589-m02 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube multinode-632589-m02]
	I1024 19:25:44.534406   29716 provision.go:172] copyRemoteCerts
	I1024 19:25:44.534453   29716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:25:44.534474   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:25:44.537182   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:44.537552   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:44.537587   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:44.537716   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:25:44.537913   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:44.538162   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:25:44.538352   29716 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/id_rsa Username:docker}
	I1024 19:25:44.631507   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 19:25:44.631579   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 19:25:44.655537   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 19:25:44.655598   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1024 19:25:44.680839   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 19:25:44.680909   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:25:44.705957   29716 provision.go:86] duration metric: configureAuth took 445.307986ms
	I1024 19:25:44.705983   29716 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:25:44.706179   29716 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:25:44.706255   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:25:44.708867   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:44.709252   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:44.709287   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:44.709453   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:25:44.709686   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:44.709846   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:44.710029   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:25:44.710220   29716 main.go:141] libmachine: Using SSH client type: native
	I1024 19:25:44.710559   29716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1024 19:25:44.710581   29716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:25:45.039854   29716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:25:45.039880   29716 main.go:141] libmachine: Checking connection to Docker...
	I1024 19:25:45.039892   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetURL
	I1024 19:25:45.041070   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | Using libvirt version 6000000
	I1024 19:25:45.043361   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.043750   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:45.043780   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.044006   29716 main.go:141] libmachine: Docker is up and running!
	I1024 19:25:45.044021   29716 main.go:141] libmachine: Reticulating splines...
	I1024 19:25:45.044027   29716 client.go:171] LocalClient.Create took 24.714698514s
	I1024 19:25:45.044045   29716 start.go:167] duration metric: libmachine.API.Create for "multinode-632589" took 24.714773447s
	I1024 19:25:45.044055   29716 start.go:300] post-start starting for "multinode-632589-m02" (driver="kvm2")
	I1024 19:25:45.044063   29716 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:25:45.044080   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:25:45.044338   29716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:25:45.044375   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:25:45.046619   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.046917   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:45.046950   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.047085   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:25:45.047271   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:45.047440   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:25:45.047581   29716 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/id_rsa Username:docker}
	I1024 19:25:45.138224   29716 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:25:45.142478   29716 command_runner.go:130] > NAME=Buildroot
	I1024 19:25:45.142496   29716 command_runner.go:130] > VERSION=2021.02.12-1-g71212f5-dirty
	I1024 19:25:45.142503   29716 command_runner.go:130] > ID=buildroot
	I1024 19:25:45.142512   29716 command_runner.go:130] > VERSION_ID=2021.02.12
	I1024 19:25:45.142518   29716 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1024 19:25:45.142541   29716 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 19:25:45.142553   29716 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 19:25:45.142601   29716 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 19:25:45.142669   29716 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 19:25:45.142678   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> /etc/ssl/certs/162982.pem
	I1024 19:25:45.142750   29716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:25:45.150533   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:25:45.173929   29716 start.go:303] post-start completed in 129.860817ms
	I1024 19:25:45.173968   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetConfigRaw
	I1024 19:25:45.174541   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetIP
	I1024 19:25:45.177084   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.177440   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:45.177464   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.177689   29716 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/config.json ...
	I1024 19:25:45.177875   29716 start.go:128] duration metric: createHost completed in 24.865639973s
	I1024 19:25:45.177897   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:25:45.180015   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.180372   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:45.180399   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.180596   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:25:45.180797   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:45.180963   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:45.181100   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:25:45.181291   29716 main.go:141] libmachine: Using SSH client type: native
	I1024 19:25:45.181626   29716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1024 19:25:45.181642   29716 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 19:25:45.310153   29716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698175545.279049614
	
	I1024 19:25:45.310173   29716 fix.go:206] guest clock: 1698175545.279049614
	I1024 19:25:45.310180   29716 fix.go:219] Guest: 2023-10-24 19:25:45.279049614 +0000 UTC Remote: 2023-10-24 19:25:45.177887542 +0000 UTC m=+93.449340694 (delta=101.162072ms)
	I1024 19:25:45.310198   29716 fix.go:190] guest clock delta is within tolerance: 101.162072ms
	I1024 19:25:45.310205   29716 start.go:83] releasing machines lock for "multinode-632589-m02", held for 24.998046163s
	I1024 19:25:45.310227   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:25:45.310489   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetIP
	I1024 19:25:45.313153   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.313488   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:45.313521   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.315784   29716 out.go:177] * Found network options:
	I1024 19:25:45.317393   29716 out.go:177]   - NO_PROXY=192.168.39.247
	W1024 19:25:45.318837   29716 proxy.go:119] fail to check proxy env: Error ip not in block
	I1024 19:25:45.318881   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:25:45.319372   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:25:45.319570   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:25:45.319652   29716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:25:45.319692   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	W1024 19:25:45.319965   29716 proxy.go:119] fail to check proxy env: Error ip not in block
	I1024 19:25:45.320036   29716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:25:45.320058   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:25:45.322608   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.322967   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:45.322998   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.323033   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.323092   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:25:45.323265   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:45.323393   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:45.323405   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:25:45.323407   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:45.323523   29716 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/id_rsa Username:docker}
	I1024 19:25:45.323594   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:25:45.323748   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:25:45.323866   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:25:45.323985   29716 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/id_rsa Username:docker}
	I1024 19:25:45.563182   29716 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1024 19:25:45.563215   29716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:25:45.569905   29716 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1024 19:25:45.570003   29716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:25:45.570076   29716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:25:45.584748   29716 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1024 19:25:45.584968   29716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 19:25:45.584988   29716 start.go:472] detecting cgroup driver to use...
	I1024 19:25:45.585049   29716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:25:45.599407   29716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:25:45.612004   29716 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:25:45.612066   29716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:25:45.624810   29716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:25:45.637290   29716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:25:45.650890   29716 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1024 19:25:45.739600   29716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:25:45.858373   29716 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1024 19:25:45.858483   29716 docker.go:214] disabling docker service ...
	I1024 19:25:45.858535   29716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:25:45.872493   29716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:25:45.885086   29716 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1024 19:25:45.885162   29716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:25:46.011208   29716 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1024 19:25:46.011275   29716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:25:46.023113   29716 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1024 19:25:46.023526   29716 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1024 19:25:46.116205   29716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:25:46.127764   29716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:25:46.144658   29716 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1024 19:25:46.145090   29716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:25:46.145142   29716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:25:46.153842   29716 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:25:46.153907   29716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:25:46.162544   29716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:25:46.171274   29716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:25:46.181175   29716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:25:46.190401   29716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:25:46.197930   29716 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 19:25:46.198130   29716 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 19:25:46.198175   29716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 19:25:46.210380   29716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:25:46.218274   29716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:25:46.325011   29716 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:25:46.487980   29716 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:25:46.488051   29716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:25:46.492987   29716 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1024 19:25:46.493002   29716 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1024 19:25:46.493009   29716 command_runner.go:130] > Device: 16h/22d	Inode: 742         Links: 1
	I1024 19:25:46.493017   29716 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:25:46.493024   29716 command_runner.go:130] > Access: 2023-10-24 19:25:46.444888046 +0000
	I1024 19:25:46.493033   29716 command_runner.go:130] > Modify: 2023-10-24 19:25:46.444888046 +0000
	I1024 19:25:46.493042   29716 command_runner.go:130] > Change: 2023-10-24 19:25:46.444888046 +0000
	I1024 19:25:46.493052   29716 command_runner.go:130] >  Birth: -
	I1024 19:25:46.493389   29716 start.go:540] Will wait 60s for crictl version
	I1024 19:25:46.493439   29716 ssh_runner.go:195] Run: which crictl
	I1024 19:25:46.497410   29716 command_runner.go:130] > /usr/bin/crictl
	I1024 19:25:46.497536   29716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:25:46.531259   29716 command_runner.go:130] > Version:  0.1.0
	I1024 19:25:46.531279   29716 command_runner.go:130] > RuntimeName:  cri-o
	I1024 19:25:46.531287   29716 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1024 19:25:46.531294   29716 command_runner.go:130] > RuntimeApiVersion:  v1
	I1024 19:25:46.531312   29716 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 19:25:46.531364   29716 ssh_runner.go:195] Run: crio --version
	I1024 19:25:46.578295   29716 command_runner.go:130] > crio version 1.24.1
	I1024 19:25:46.578315   29716 command_runner.go:130] > Version:          1.24.1
	I1024 19:25:46.578325   29716 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1024 19:25:46.578333   29716 command_runner.go:130] > GitTreeState:     dirty
	I1024 19:25:46.578341   29716 command_runner.go:130] > BuildDate:        2023-10-16T21:18:20Z
	I1024 19:25:46.578348   29716 command_runner.go:130] > GoVersion:        go1.19.9
	I1024 19:25:46.578353   29716 command_runner.go:130] > Compiler:         gc
	I1024 19:25:46.578358   29716 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:25:46.578369   29716 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:25:46.578378   29716 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:25:46.578385   29716 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:25:46.578391   29716 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:25:46.578550   29716 ssh_runner.go:195] Run: crio --version
	I1024 19:25:46.624626   29716 command_runner.go:130] > crio version 1.24.1
	I1024 19:25:46.624648   29716 command_runner.go:130] > Version:          1.24.1
	I1024 19:25:46.624659   29716 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1024 19:25:46.624665   29716 command_runner.go:130] > GitTreeState:     dirty
	I1024 19:25:46.624673   29716 command_runner.go:130] > BuildDate:        2023-10-16T21:18:20Z
	I1024 19:25:46.624681   29716 command_runner.go:130] > GoVersion:        go1.19.9
	I1024 19:25:46.624686   29716 command_runner.go:130] > Compiler:         gc
	I1024 19:25:46.624699   29716 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:25:46.624710   29716 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:25:46.624731   29716 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:25:46.624746   29716 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:25:46.624754   29716 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:25:46.626996   29716 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 19:25:46.628601   29716 out.go:177]   - env NO_PROXY=192.168.39.247
	I1024 19:25:46.630016   29716 main.go:141] libmachine: (multinode-632589-m02) Calling .GetIP
	I1024 19:25:46.632527   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:46.632924   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:25:46.632953   29716 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:25:46.633119   29716 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 19:25:46.637238   29716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:25:46.649702   29716 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589 for IP: 192.168.39.186
	I1024 19:25:46.649736   29716 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:25:46.649875   29716 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 19:25:46.649919   29716 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 19:25:46.649935   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 19:25:46.649967   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 19:25:46.649980   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 19:25:46.649991   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 19:25:46.650040   29716 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 19:25:46.650070   29716 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 19:25:46.650082   29716 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 19:25:46.650102   29716 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 19:25:46.650124   29716 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:25:46.650145   29716 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 19:25:46.650182   29716 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:25:46.650206   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:25:46.650220   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem -> /usr/share/ca-certificates/16298.pem
	I1024 19:25:46.650232   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> /usr/share/ca-certificates/162982.pem
	I1024 19:25:46.650499   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:25:46.673374   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:25:46.695835   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:25:46.717700   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 19:25:46.739825   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:25:46.762849   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 19:25:46.786287   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 19:25:46.810098   29716 ssh_runner.go:195] Run: openssl version
	I1024 19:25:46.815389   29716 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1024 19:25:46.815743   29716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:25:46.825284   29716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:25:46.830186   29716 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:25:46.830501   29716 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:25:46.830548   29716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:25:46.835837   29716 command_runner.go:130] > b5213941
	I1024 19:25:46.836076   29716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:25:46.845598   29716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 19:25:46.854936   29716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 19:25:46.859394   29716 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 19:25:46.859415   29716 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 19:25:46.859455   29716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 19:25:46.864713   29716 command_runner.go:130] > 51391683
	I1024 19:25:46.864981   29716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 19:25:46.874157   29716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 19:25:46.884720   29716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 19:25:46.889145   29716 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 19:25:46.889495   29716 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 19:25:46.889549   29716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 19:25:46.895104   29716 command_runner.go:130] > 3ec20f2e
	I1024 19:25:46.895159   29716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 19:25:46.905189   29716 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:25:46.909687   29716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:25:46.909718   29716 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:25:46.909800   29716 ssh_runner.go:195] Run: crio config
	I1024 19:25:46.981069   29716 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1024 19:25:46.981100   29716 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1024 19:25:46.981108   29716 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1024 19:25:46.981111   29716 command_runner.go:130] > #
	I1024 19:25:46.981118   29716 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1024 19:25:46.981127   29716 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1024 19:25:46.981133   29716 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1024 19:25:46.981140   29716 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1024 19:25:46.981144   29716 command_runner.go:130] > # reload'.
	I1024 19:25:46.981149   29716 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1024 19:25:46.981157   29716 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1024 19:25:46.981163   29716 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1024 19:25:46.981169   29716 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1024 19:25:46.981179   29716 command_runner.go:130] > [crio]
	I1024 19:25:46.981188   29716 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1024 19:25:46.981205   29716 command_runner.go:130] > # containers images, in this directory.
	I1024 19:25:46.981213   29716 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1024 19:25:46.981237   29716 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1024 19:25:46.981250   29716 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1024 19:25:46.981261   29716 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1024 19:25:46.981275   29716 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1024 19:25:46.981283   29716 command_runner.go:130] > storage_driver = "overlay"
	I1024 19:25:46.981307   29716 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1024 19:25:46.981320   29716 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1024 19:25:46.981331   29716 command_runner.go:130] > storage_option = [
	I1024 19:25:46.981338   29716 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1024 19:25:46.981345   29716 command_runner.go:130] > ]
	I1024 19:25:46.981352   29716 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1024 19:25:46.981365   29716 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1024 19:25:46.981376   29716 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1024 19:25:46.981386   29716 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1024 19:25:46.981400   29716 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1024 19:25:46.981411   29716 command_runner.go:130] > # always happen on a node reboot
	I1024 19:25:46.981422   29716 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1024 19:25:46.981434   29716 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1024 19:25:46.981441   29716 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1024 19:25:46.981450   29716 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1024 19:25:46.981490   29716 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1024 19:25:46.981506   29716 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1024 19:25:46.981519   29716 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1024 19:25:46.981531   29716 command_runner.go:130] > # internal_wipe = true
	I1024 19:25:46.981540   29716 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1024 19:25:46.981553   29716 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1024 19:25:46.981564   29716 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1024 19:25:46.981575   29716 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1024 19:25:46.981589   29716 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1024 19:25:46.981601   29716 command_runner.go:130] > [crio.api]
	I1024 19:25:46.981612   29716 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1024 19:25:46.981619   29716 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1024 19:25:46.981629   29716 command_runner.go:130] > # IP address on which the stream server will listen.
	I1024 19:25:46.981640   29716 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1024 19:25:46.981651   29716 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1024 19:25:46.981663   29716 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1024 19:25:46.981670   29716 command_runner.go:130] > # stream_port = "0"
	I1024 19:25:46.981679   29716 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1024 19:25:46.981689   29716 command_runner.go:130] > # stream_enable_tls = false
	I1024 19:25:46.981699   29716 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1024 19:25:46.981708   29716 command_runner.go:130] > # stream_idle_timeout = ""
	I1024 19:25:46.981722   29716 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1024 19:25:46.981735   29716 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1024 19:25:46.981745   29716 command_runner.go:130] > # minutes.
	I1024 19:25:46.981752   29716 command_runner.go:130] > # stream_tls_cert = ""
	I1024 19:25:46.981764   29716 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1024 19:25:46.981777   29716 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1024 19:25:46.981786   29716 command_runner.go:130] > # stream_tls_key = ""
	I1024 19:25:46.981797   29716 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1024 19:25:46.981809   29716 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1024 19:25:46.981818   29716 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1024 19:25:46.981833   29716 command_runner.go:130] > # stream_tls_ca = ""
	I1024 19:25:46.981845   29716 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:25:46.981857   29716 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1024 19:25:46.981873   29716 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:25:46.981887   29716 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1024 19:25:46.981905   29716 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1024 19:25:46.981914   29716 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1024 19:25:46.981921   29716 command_runner.go:130] > [crio.runtime]
	I1024 19:25:46.981931   29716 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1024 19:25:46.981940   29716 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1024 19:25:46.981952   29716 command_runner.go:130] > # "nofile=1024:2048"
	I1024 19:25:46.981962   29716 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1024 19:25:46.981972   29716 command_runner.go:130] > # default_ulimits = [
	I1024 19:25:46.981977   29716 command_runner.go:130] > # ]
	I1024 19:25:46.981996   29716 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1024 19:25:46.982006   29716 command_runner.go:130] > # no_pivot = false
	I1024 19:25:46.982016   29716 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1024 19:25:46.982030   29716 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1024 19:25:46.982041   29716 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1024 19:25:46.982054   29716 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1024 19:25:46.982064   29716 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1024 19:25:46.982073   29716 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:25:46.982080   29716 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1024 19:25:46.982084   29716 command_runner.go:130] > # Cgroup setting for conmon
	I1024 19:25:46.982092   29716 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1024 19:25:46.982117   29716 command_runner.go:130] > conmon_cgroup = "pod"
	I1024 19:25:46.982131   29716 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1024 19:25:46.982140   29716 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1024 19:25:46.982154   29716 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:25:46.982164   29716 command_runner.go:130] > conmon_env = [
	I1024 19:25:46.982177   29716 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1024 19:25:46.982184   29716 command_runner.go:130] > ]
	I1024 19:25:46.982195   29716 command_runner.go:130] > # Additional environment variables to set for all the
	I1024 19:25:46.982206   29716 command_runner.go:130] > # containers. These are overridden if set in the
	I1024 19:25:46.982219   29716 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1024 19:25:46.982229   29716 command_runner.go:130] > # default_env = [
	I1024 19:25:46.982239   29716 command_runner.go:130] > # ]
	I1024 19:25:46.982249   29716 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1024 19:25:46.982259   29716 command_runner.go:130] > # selinux = false
	I1024 19:25:46.982270   29716 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1024 19:25:46.982283   29716 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1024 19:25:46.982295   29716 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1024 19:25:46.982306   29716 command_runner.go:130] > # seccomp_profile = ""
	I1024 19:25:46.982317   29716 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1024 19:25:46.982329   29716 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1024 19:25:46.982342   29716 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1024 19:25:46.982353   29716 command_runner.go:130] > # which might increase security.
	I1024 19:25:46.982359   29716 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1024 19:25:46.982372   29716 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1024 19:25:46.982385   29716 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1024 19:25:46.982401   29716 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1024 19:25:46.982415   29716 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1024 19:25:46.982427   29716 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:25:46.982435   29716 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1024 19:25:46.982447   29716 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1024 19:25:46.982457   29716 command_runner.go:130] > # the cgroup blockio controller.
	I1024 19:25:46.982468   29716 command_runner.go:130] > # blockio_config_file = ""
	I1024 19:25:46.982481   29716 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1024 19:25:46.982492   29716 command_runner.go:130] > # irqbalance daemon.
	I1024 19:25:46.982501   29716 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1024 19:25:46.982515   29716 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1024 19:25:46.982524   29716 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:25:46.982534   29716 command_runner.go:130] > # rdt_config_file = ""
	I1024 19:25:46.982547   29716 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1024 19:25:46.982557   29716 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1024 19:25:46.982566   29716 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1024 19:25:46.982578   29716 command_runner.go:130] > # separate_pull_cgroup = ""
	I1024 19:25:46.982589   29716 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1024 19:25:46.982603   29716 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1024 19:25:46.982610   29716 command_runner.go:130] > # will be added.
	I1024 19:25:46.982620   29716 command_runner.go:130] > # default_capabilities = [
	I1024 19:25:46.982627   29716 command_runner.go:130] > # 	"CHOWN",
	I1024 19:25:46.982638   29716 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1024 19:25:46.982648   29716 command_runner.go:130] > # 	"FSETID",
	I1024 19:25:46.982666   29716 command_runner.go:130] > # 	"FOWNER",
	I1024 19:25:46.982678   29716 command_runner.go:130] > # 	"SETGID",
	I1024 19:25:46.982685   29716 command_runner.go:130] > # 	"SETUID",
	I1024 19:25:46.982694   29716 command_runner.go:130] > # 	"SETPCAP",
	I1024 19:25:46.982701   29716 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1024 19:25:46.982713   29716 command_runner.go:130] > # 	"KILL",
	I1024 19:25:46.982722   29716 command_runner.go:130] > # ]
	I1024 19:25:46.982733   29716 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1024 19:25:46.982746   29716 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:25:46.982757   29716 command_runner.go:130] > # default_sysctls = [
	I1024 19:25:46.982766   29716 command_runner.go:130] > # ]
	I1024 19:25:46.982774   29716 command_runner.go:130] > # List of devices on the host that a
	I1024 19:25:46.982789   29716 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1024 19:25:46.982799   29716 command_runner.go:130] > # allowed_devices = [
	I1024 19:25:46.982805   29716 command_runner.go:130] > # 	"/dev/fuse",
	I1024 19:25:46.982814   29716 command_runner.go:130] > # ]
	I1024 19:25:46.982822   29716 command_runner.go:130] > # List of additional devices. specified as
	I1024 19:25:46.982837   29716 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1024 19:25:46.982850   29716 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1024 19:25:46.982874   29716 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:25:46.982882   29716 command_runner.go:130] > # additional_devices = [
	I1024 19:25:46.982891   29716 command_runner.go:130] > # ]
	I1024 19:25:46.982905   29716 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1024 19:25:46.982915   29716 command_runner.go:130] > # cdi_spec_dirs = [
	I1024 19:25:46.982922   29716 command_runner.go:130] > # 	"/etc/cdi",
	I1024 19:25:46.982969   29716 command_runner.go:130] > # 	"/var/run/cdi",
	I1024 19:25:46.982979   29716 command_runner.go:130] > # ]
	I1024 19:25:46.983006   29716 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1024 19:25:46.983020   29716 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1024 19:25:46.983031   29716 command_runner.go:130] > # Defaults to false.
	I1024 19:25:46.983041   29716 command_runner.go:130] > # device_ownership_from_security_context = false
	I1024 19:25:46.983054   29716 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1024 19:25:46.983067   29716 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1024 19:25:46.983072   29716 command_runner.go:130] > # hooks_dir = [
	I1024 19:25:46.983078   29716 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1024 19:25:46.983084   29716 command_runner.go:130] > # ]
	I1024 19:25:46.983094   29716 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1024 19:25:46.983109   29716 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1024 19:25:46.983121   29716 command_runner.go:130] > # its default mounts from the following two files:
	I1024 19:25:46.983127   29716 command_runner.go:130] > #
	I1024 19:25:46.983138   29716 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1024 19:25:46.983151   29716 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1024 19:25:46.983163   29716 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1024 19:25:46.983172   29716 command_runner.go:130] > #
	I1024 19:25:46.983182   29716 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1024 19:25:46.983196   29716 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1024 19:25:46.983209   29716 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1024 19:25:46.983223   29716 command_runner.go:130] > #      only add mounts it finds in this file.
	I1024 19:25:46.983231   29716 command_runner.go:130] > #
	I1024 19:25:46.983238   29716 command_runner.go:130] > # default_mounts_file = ""
	I1024 19:25:46.983251   29716 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1024 19:25:46.983266   29716 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1024 19:25:46.983277   29716 command_runner.go:130] > pids_limit = 1024
	I1024 19:25:46.983288   29716 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1024 19:25:46.983298   29716 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1024 19:25:46.983308   29716 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1024 19:25:46.983325   29716 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1024 19:25:46.983335   29716 command_runner.go:130] > # log_size_max = -1
	I1024 19:25:46.983346   29716 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1024 19:25:46.983356   29716 command_runner.go:130] > # log_to_journald = false
	I1024 19:25:46.983367   29716 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1024 19:25:46.983379   29716 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1024 19:25:46.983391   29716 command_runner.go:130] > # Path to directory for container attach sockets.
	I1024 19:25:46.983403   29716 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1024 19:25:46.983413   29716 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1024 19:25:46.983423   29716 command_runner.go:130] > # bind_mount_prefix = ""
	I1024 19:25:46.983434   29716 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1024 19:25:46.983443   29716 command_runner.go:130] > # read_only = false
	I1024 19:25:46.983453   29716 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1024 19:25:46.983466   29716 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1024 19:25:46.983477   29716 command_runner.go:130] > # live configuration reload.
	I1024 19:25:46.983487   29716 command_runner.go:130] > # log_level = "info"
	I1024 19:25:46.983500   29716 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1024 19:25:46.983509   29716 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:25:46.983519   29716 command_runner.go:130] > # log_filter = ""
	I1024 19:25:46.983530   29716 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1024 19:25:46.983539   29716 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1024 19:25:46.983543   29716 command_runner.go:130] > # separated by comma.
	I1024 19:25:46.983547   29716 command_runner.go:130] > # uid_mappings = ""
	I1024 19:25:46.983555   29716 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1024 19:25:46.983561   29716 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1024 19:25:46.983568   29716 command_runner.go:130] > # separated by comma.
	I1024 19:25:46.983572   29716 command_runner.go:130] > # gid_mappings = ""
	I1024 19:25:46.983578   29716 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1024 19:25:46.983586   29716 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:25:46.983598   29716 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:25:46.983607   29716 command_runner.go:130] > # minimum_mappable_uid = -1
	I1024 19:25:46.983621   29716 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1024 19:25:46.983634   29716 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:25:46.983659   29716 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:25:46.983670   29716 command_runner.go:130] > # minimum_mappable_gid = -1
	I1024 19:25:46.983683   29716 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1024 19:25:46.983696   29716 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1024 19:25:46.983706   29716 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1024 19:25:46.983712   29716 command_runner.go:130] > # ctr_stop_timeout = 30
	I1024 19:25:46.983718   29716 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1024 19:25:46.983726   29716 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1024 19:25:46.983731   29716 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1024 19:25:46.983739   29716 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1024 19:25:46.983743   29716 command_runner.go:130] > drop_infra_ctr = false
	I1024 19:25:46.983752   29716 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1024 19:25:46.983758   29716 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1024 19:25:46.983789   29716 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1024 19:25:46.983800   29716 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1024 19:25:46.983810   29716 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1024 19:25:46.983822   29716 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1024 19:25:46.983830   29716 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1024 19:25:46.983844   29716 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1024 19:25:46.983852   29716 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1024 19:25:46.983859   29716 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1024 19:25:46.983868   29716 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1024 19:25:46.983874   29716 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1024 19:25:46.983880   29716 command_runner.go:130] > # default_runtime = "runc"
	I1024 19:25:46.983886   29716 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1024 19:25:46.983893   29716 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1024 19:25:46.983903   29716 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1024 19:25:46.983909   29716 command_runner.go:130] > # creation as a file is not desired either.
	I1024 19:25:46.983917   29716 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1024 19:25:46.983924   29716 command_runner.go:130] > # the hostname is being managed dynamically.
	I1024 19:25:46.983929   29716 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1024 19:25:46.983935   29716 command_runner.go:130] > # ]
	I1024 19:25:46.983942   29716 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1024 19:25:46.983950   29716 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1024 19:25:46.983957   29716 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1024 19:25:46.983965   29716 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1024 19:25:46.983968   29716 command_runner.go:130] > #
	I1024 19:25:46.983973   29716 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1024 19:25:46.983978   29716 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1024 19:25:46.983986   29716 command_runner.go:130] > #  runtime_type = "oci"
	I1024 19:25:46.983994   29716 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1024 19:25:46.983998   29716 command_runner.go:130] > #  privileged_without_host_devices = false
	I1024 19:25:46.984005   29716 command_runner.go:130] > #  allowed_annotations = []
	I1024 19:25:46.984009   29716 command_runner.go:130] > # Where:
	I1024 19:25:46.984015   29716 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1024 19:25:46.984021   29716 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1024 19:25:46.984031   29716 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1024 19:25:46.984037   29716 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1024 19:25:46.984041   29716 command_runner.go:130] > #   in $PATH.
	I1024 19:25:46.984048   29716 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1024 19:25:46.984055   29716 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1024 19:25:46.984060   29716 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1024 19:25:46.984064   29716 command_runner.go:130] > #   state.
	I1024 19:25:46.984070   29716 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1024 19:25:46.984078   29716 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1024 19:25:46.984084   29716 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1024 19:25:46.984090   29716 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1024 19:25:46.984098   29716 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1024 19:25:46.984104   29716 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1024 19:25:46.984111   29716 command_runner.go:130] > #   The currently recognized values are:
	I1024 19:25:46.984117   29716 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1024 19:25:46.984126   29716 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1024 19:25:46.984135   29716 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1024 19:25:46.984141   29716 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1024 19:25:46.984150   29716 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1024 19:25:46.984158   29716 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1024 19:25:46.984169   29716 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1024 19:25:46.984183   29716 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1024 19:25:46.984193   29716 command_runner.go:130] > #   should be moved to the container's cgroup
	I1024 19:25:46.984197   29716 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1024 19:25:46.984204   29716 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1024 19:25:46.984208   29716 command_runner.go:130] > runtime_type = "oci"
	I1024 19:25:46.984215   29716 command_runner.go:130] > runtime_root = "/run/runc"
	I1024 19:25:46.984220   29716 command_runner.go:130] > runtime_config_path = ""
	I1024 19:25:46.984223   29716 command_runner.go:130] > monitor_path = ""
	I1024 19:25:46.984229   29716 command_runner.go:130] > monitor_cgroup = ""
	I1024 19:25:46.984234   29716 command_runner.go:130] > monitor_exec_cgroup = ""
	I1024 19:25:46.984242   29716 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1024 19:25:46.984246   29716 command_runner.go:130] > # running containers
	I1024 19:25:46.984256   29716 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1024 19:25:46.984271   29716 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1024 19:25:46.984298   29716 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1024 19:25:46.984307   29716 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1024 19:25:46.984331   29716 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1024 19:25:46.984342   29716 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1024 19:25:46.984352   29716 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1024 19:25:46.984363   29716 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1024 19:25:46.984373   29716 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1024 19:25:46.984381   29716 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1024 19:25:46.984388   29716 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1024 19:25:46.984395   29716 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1024 19:25:46.984401   29716 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1024 19:25:46.984411   29716 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1024 19:25:46.984418   29716 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1024 19:25:46.984426   29716 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1024 19:25:46.984434   29716 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1024 19:25:46.984444   29716 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1024 19:25:46.984452   29716 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1024 19:25:46.984459   29716 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1024 19:25:46.984464   29716 command_runner.go:130] > # Example:
	I1024 19:25:46.984469   29716 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1024 19:25:46.984474   29716 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1024 19:25:46.984481   29716 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1024 19:25:46.984489   29716 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1024 19:25:46.984496   29716 command_runner.go:130] > # cpuset = 0
	I1024 19:25:46.984500   29716 command_runner.go:130] > # cpushares = "0-1"
	I1024 19:25:46.984506   29716 command_runner.go:130] > # Where:
	I1024 19:25:46.984510   29716 command_runner.go:130] > # The workload name is workload-type.
	I1024 19:25:46.984519   29716 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1024 19:25:46.984528   29716 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1024 19:25:46.984536   29716 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1024 19:25:46.984548   29716 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1024 19:25:46.984556   29716 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1024 19:25:46.984559   29716 command_runner.go:130] > # 
	I1024 19:25:46.984568   29716 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1024 19:25:46.984574   29716 command_runner.go:130] > #
	I1024 19:25:46.984580   29716 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1024 19:25:46.984588   29716 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1024 19:25:46.984596   29716 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1024 19:25:46.984604   29716 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1024 19:25:46.984612   29716 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1024 19:25:46.984618   29716 command_runner.go:130] > [crio.image]
	I1024 19:25:46.984624   29716 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1024 19:25:46.984629   29716 command_runner.go:130] > # default_transport = "docker://"
	I1024 19:25:46.984637   29716 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1024 19:25:46.984644   29716 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:25:46.984650   29716 command_runner.go:130] > # global_auth_file = ""
	I1024 19:25:46.984655   29716 command_runner.go:130] > # The image used to instantiate infra containers.
	I1024 19:25:46.984662   29716 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:25:46.984669   29716 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1024 19:25:46.984676   29716 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1024 19:25:46.984684   29716 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:25:46.984691   29716 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:25:46.984697   29716 command_runner.go:130] > # pause_image_auth_file = ""
	I1024 19:25:46.984709   29716 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1024 19:25:46.984722   29716 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1024 19:25:46.984730   29716 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1024 19:25:46.984737   29716 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1024 19:25:46.984744   29716 command_runner.go:130] > # pause_command = "/pause"
	I1024 19:25:46.984751   29716 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1024 19:25:46.984760   29716 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1024 19:25:46.984773   29716 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1024 19:25:46.984785   29716 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1024 19:25:46.984797   29716 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1024 19:25:46.984808   29716 command_runner.go:130] > # signature_policy = ""
	I1024 19:25:46.984818   29716 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1024 19:25:46.984824   29716 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1024 19:25:46.984831   29716 command_runner.go:130] > # changing them here.
	I1024 19:25:46.984835   29716 command_runner.go:130] > # insecure_registries = [
	I1024 19:25:46.984841   29716 command_runner.go:130] > # ]
	I1024 19:25:46.984848   29716 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1024 19:25:46.984855   29716 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1024 19:25:46.984862   29716 command_runner.go:130] > # image_volumes = "mkdir"
	I1024 19:25:46.984886   29716 command_runner.go:130] > # Temporary directory to use for storing big files
	I1024 19:25:46.984897   29716 command_runner.go:130] > # big_files_temporary_dir = ""
	I1024 19:25:46.984911   29716 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1024 19:25:46.984921   29716 command_runner.go:130] > # CNI plugins.
	I1024 19:25:46.984926   29716 command_runner.go:130] > [crio.network]
	I1024 19:25:46.984932   29716 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1024 19:25:46.984940   29716 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1024 19:25:46.984945   29716 command_runner.go:130] > # cni_default_network = ""
	I1024 19:25:46.984951   29716 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1024 19:25:46.984956   29716 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1024 19:25:46.984964   29716 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1024 19:25:46.984968   29716 command_runner.go:130] > # plugin_dirs = [
	I1024 19:25:46.984973   29716 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1024 19:25:46.984976   29716 command_runner.go:130] > # ]
	I1024 19:25:46.984988   29716 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1024 19:25:46.984995   29716 command_runner.go:130] > [crio.metrics]
	I1024 19:25:46.985000   29716 command_runner.go:130] > # Globally enable or disable metrics support.
	I1024 19:25:46.985006   29716 command_runner.go:130] > enable_metrics = true
	I1024 19:25:46.985010   29716 command_runner.go:130] > # Specify enabled metrics collectors.
	I1024 19:25:46.985016   29716 command_runner.go:130] > # Per default all metrics are enabled.
	I1024 19:25:46.985025   29716 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1024 19:25:46.985031   29716 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1024 19:25:46.985037   29716 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1024 19:25:46.985044   29716 command_runner.go:130] > # metrics_collectors = [
	I1024 19:25:46.985048   29716 command_runner.go:130] > # 	"operations",
	I1024 19:25:46.985053   29716 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1024 19:25:46.985058   29716 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1024 19:25:46.985063   29716 command_runner.go:130] > # 	"operations_errors",
	I1024 19:25:46.985067   29716 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1024 19:25:46.985074   29716 command_runner.go:130] > # 	"image_pulls_by_name",
	I1024 19:25:46.985079   29716 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1024 19:25:46.985084   29716 command_runner.go:130] > # 	"image_pulls_failures",
	I1024 19:25:46.985088   29716 command_runner.go:130] > # 	"image_pulls_successes",
	I1024 19:25:46.985094   29716 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1024 19:25:46.985099   29716 command_runner.go:130] > # 	"image_layer_reuse",
	I1024 19:25:46.985105   29716 command_runner.go:130] > # 	"containers_oom_total",
	I1024 19:25:46.985110   29716 command_runner.go:130] > # 	"containers_oom",
	I1024 19:25:46.985116   29716 command_runner.go:130] > # 	"processes_defunct",
	I1024 19:25:46.985120   29716 command_runner.go:130] > # 	"operations_total",
	I1024 19:25:46.985124   29716 command_runner.go:130] > # 	"operations_latency_seconds",
	I1024 19:25:46.985131   29716 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1024 19:25:46.985135   29716 command_runner.go:130] > # 	"operations_errors_total",
	I1024 19:25:46.985143   29716 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1024 19:25:46.985150   29716 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1024 19:25:46.985154   29716 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1024 19:25:46.985161   29716 command_runner.go:130] > # 	"image_pulls_success_total",
	I1024 19:25:46.985165   29716 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1024 19:25:46.985172   29716 command_runner.go:130] > # 	"containers_oom_count_total",
	I1024 19:25:46.985175   29716 command_runner.go:130] > # ]
	I1024 19:25:46.985181   29716 command_runner.go:130] > # The port on which the metrics server will listen.
	I1024 19:25:46.985187   29716 command_runner.go:130] > # metrics_port = 9090
	I1024 19:25:46.985192   29716 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1024 19:25:46.985198   29716 command_runner.go:130] > # metrics_socket = ""
	I1024 19:25:46.985203   29716 command_runner.go:130] > # The certificate for the secure metrics server.
	I1024 19:25:46.985211   29716 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1024 19:25:46.985217   29716 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1024 19:25:46.985222   29716 command_runner.go:130] > # certificate on any modification event.
	I1024 19:25:46.985228   29716 command_runner.go:130] > # metrics_cert = ""
	I1024 19:25:46.985234   29716 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1024 19:25:46.985241   29716 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1024 19:25:46.985245   29716 command_runner.go:130] > # metrics_key = ""
	I1024 19:25:46.985252   29716 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1024 19:25:46.985256   29716 command_runner.go:130] > [crio.tracing]
	I1024 19:25:46.985263   29716 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1024 19:25:46.985270   29716 command_runner.go:130] > # enable_tracing = false
	I1024 19:25:46.985275   29716 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1024 19:25:46.985280   29716 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1024 19:25:46.985286   29716 command_runner.go:130] > # Number of samples to collect per million spans.
	I1024 19:25:46.985305   29716 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1024 19:25:46.985316   29716 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1024 19:25:46.985326   29716 command_runner.go:130] > [crio.stats]
	I1024 19:25:46.985335   29716 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1024 19:25:46.985348   29716 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1024 19:25:46.985356   29716 command_runner.go:130] > # stats_collection_period = 0
	I1024 19:25:46.985670   29716 command_runner.go:130] ! time="2023-10-24 19:25:46.948883240Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1024 19:25:46.985691   29716 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
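	The TOML dump above is the container runtime configuration as read back from the machine, and the two "!"-prefixed lines are CRI-O's stderr banner from the same invocation. To re-dump it by hand, or to confirm that the metrics endpoint turned on by enable_metrics = true is answering, something like the following works (a sketch; it assumes the profile name from this log and the default metrics_port = 9090 that is only shown commented out above):

	  out/minikube-linux-amd64 -p multinode-632589 ssh "sudo crio config | head -n 40"
	  out/minikube-linux-amd64 -p multinode-632589 ssh "curl -s http://127.0.0.1:9090/metrics | head"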
	I1024 19:25:46.985870   29716 cni.go:84] Creating CNI manager for ""
	I1024 19:25:46.985883   29716 cni.go:136] 2 nodes found, recommending kindnet
	I1024 19:25:46.985893   29716 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:25:46.985917   29716 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-632589 NodeName:multinode-632589-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:25:46.986067   29716 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-632589-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 19:25:46.986132   29716 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-632589-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
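	The unit override above clears the packaged ExecStart= before substituting the fully flagged kubelet command; the drop-in itself is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below. To inspect the merged unit on the new worker once it is up (a sketch; the -n node selector value is an assumption based on the node name in this log):

	  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m02 "sudo systemctl cat kubelet"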
	I1024 19:25:46.986193   29716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:25:46.994602   29716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1024 19:25:46.994745   29716 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1024 19:25:46.994814   29716 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1024 19:25:47.006195   29716 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1024 19:25:47.006204   29716 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1024 19:25:47.006207   29716 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1024 19:25:47.006221   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1024 19:25:47.006294   29716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1024 19:25:47.010601   29716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1024 19:25:47.010636   29716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1024 19:25:47.010656   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1024 19:25:48.325726   29716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:25:48.338376   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/linux/amd64/v1.28.3/kubelet -> /var/lib/minikube/binaries/v1.28.3/kubelet
	I1024 19:25:48.338467   29716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubelet
	I1024 19:25:48.342994   29716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubelet': No such file or directory
	I1024 19:25:48.343030   29716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubelet': No such file or directory
	I1024 19:25:48.343058   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/linux/amd64/v1.28.3/kubelet --> /var/lib/minikube/binaries/v1.28.3/kubelet (110780416 bytes)
	I1024 19:25:48.471404   29716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1024 19:25:48.471494   29716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1024 19:25:48.507850   29716 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1024 19:25:48.507908   29716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1024 19:25:48.507935   29716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
	I1024 19:25:49.048480   29716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1024 19:25:49.056855   29716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1024 19:25:49.073650   29716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:25:49.088838   29716 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I1024 19:25:49.092881   29716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:25:49.105128   29716 host.go:66] Checking if "multinode-632589" exists ...
	I1024 19:25:49.105392   29716 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:25:49.105544   29716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:25:49.105590   29716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:25:49.119465   29716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40923
	I1024 19:25:49.119893   29716 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:25:49.120321   29716 main.go:141] libmachine: Using API Version  1
	I1024 19:25:49.120340   29716 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:25:49.120640   29716 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:25:49.120814   29716 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:25:49.120964   29716 start.go:304] JoinCluster: &{Name:multinode-632589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:25:49.121088   29716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1024 19:25:49.121108   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:25:49.123870   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:25:49.124260   29716 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:25:49.124290   29716 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:25:49.124434   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:25:49.124604   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:25:49.124743   29716 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:25:49.124887   29716 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:25:49.281600   29716 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 698u8j.n5vwvi1sbp6eodbl --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
	I1024 19:25:49.288098   29716 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.186 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1024 19:25:49.288144   29716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 698u8j.n5vwvi1sbp6eodbl --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-632589-m02"
	I1024 19:25:49.330389   29716 command_runner.go:130] > [preflight] Running pre-flight checks
	I1024 19:25:49.475877   29716 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1024 19:25:49.475907   29716 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1024 19:25:49.507469   29716 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:25:49.507593   29716 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:25:49.507613   29716 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1024 19:25:49.619942   29716 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1024 19:25:52.140056   29716 command_runner.go:130] > This node has joined the cluster:
	I1024 19:25:52.140084   29716 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1024 19:25:52.140094   29716 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1024 19:25:52.140104   29716 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1024 19:25:52.142092   29716 command_runner.go:130] ! W1024 19:25:49.305814     822 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1024 19:25:52.142116   29716 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:25:52.142174   29716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 698u8j.n5vwvi1sbp6eodbl --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-632589-m02": (2.854015949s)
	I1024 19:25:52.142198   29716 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1024 19:25:52.401019   29716 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1024 19:25:52.401125   29716 start.go:306] JoinCluster complete in 3.280158276s
	I1024 19:25:52.401151   29716 cni.go:84] Creating CNI manager for ""
	I1024 19:25:52.401159   29716 cni.go:136] 2 nodes found, recommending kindnet
	I1024 19:25:52.401220   29716 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:25:52.407551   29716 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1024 19:25:52.407580   29716 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1024 19:25:52.407591   29716 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1024 19:25:52.407602   29716 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:25:52.407615   29716 command_runner.go:130] > Access: 2023-10-24 19:24:24.936689766 +0000
	I1024 19:25:52.407628   29716 command_runner.go:130] > Modify: 2023-10-16 21:25:26.000000000 +0000
	I1024 19:25:52.407641   29716 command_runner.go:130] > Change: 2023-10-24 19:24:23.066689766 +0000
	I1024 19:25:52.407651   29716 command_runner.go:130] >  Birth: -
	I1024 19:25:52.407716   29716 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 19:25:52.407732   29716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:25:52.426257   29716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 19:25:52.720646   29716 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1024 19:25:52.725858   29716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1024 19:25:52.728201   29716 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1024 19:25:52.746068   29716 command_runner.go:130] > daemonset.apps/kindnet configured
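	With the join finished and the kindnet manifest re-applied, the usual out-of-band check is that the worker has registered and the CNI daemonset covers both nodes, for example (a sketch using the cluster/context name from this log):

	  kubectl --context multinode-632589 get nodes -o wide
	  kubectl --context multinode-632589 -n kube-system rollout status daemonset/kindnet --timeout=120s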
	I1024 19:25:52.748965   29716 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:25:52.749218   29716 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:25:52.749574   29716 round_trippers.go:463] GET https://192.168.39.247:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 19:25:52.749592   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:52.749603   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:52.749615   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:52.753591   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:52.753612   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:52.753622   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:52 GMT
	I1024 19:25:52.753630   29716 round_trippers.go:580]     Audit-Id: ed16c98a-e7d1-47da-9d4e-904eb1a95352
	I1024 19:25:52.753641   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:52.753648   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:52.753660   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:52.753668   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:52.753680   29716 round_trippers.go:580]     Content-Length: 291
	I1024 19:25:52.753838   29716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d94f45ae-0601-4f22-bf81-4e1e0b9f4023","resourceVersion":"440","creationTimestamp":"2023-10-24T19:24:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1024 19:25:52.753947   29716 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-632589" context rescaled to 1 replicas
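	The Scale GET above is how the coredns replica count is read before being pinned back to one; the equivalent manual operation would be (a sketch, same context assumed):

	  kubectl --context multinode-632589 -n kube-system scale deployment coredns --replicas=1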
	I1024 19:25:52.753981   29716 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.186 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1024 19:25:52.756863   29716 out.go:177] * Verifying Kubernetes components...
	I1024 19:25:52.758428   29716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:25:52.778903   29716 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:25:52.779175   29716 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:25:52.779383   29716 node_ready.go:35] waiting up to 6m0s for node "multinode-632589-m02" to be "Ready" ...
	I1024 19:25:52.779440   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:52.779448   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:52.779455   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:52.779464   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:52.782830   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:52.782852   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:52.782863   29716 round_trippers.go:580]     Audit-Id: d3bcea72-7f93-4729-8401-319f44744bbb
	I1024 19:25:52.782872   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:52.782881   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:52.782893   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:52.782903   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:52.782914   29716 round_trippers.go:580]     Content-Length: 3531
	I1024 19:25:52.782925   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:52 GMT
	I1024 19:25:52.783019   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"489","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 2507 chars]
	I1024 19:25:52.783334   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:52.783352   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:52.783362   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:52.783371   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:52.785542   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:52.785565   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:52.785575   29716 round_trippers.go:580]     Content-Length: 3531
	I1024 19:25:52.785589   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:52 GMT
	I1024 19:25:52.785598   29716 round_trippers.go:580]     Audit-Id: 9f718cc7-fcc5-41cc-bcbc-e173285eadf5
	I1024 19:25:52.785607   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:52.785623   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:52.785632   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:52.785640   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:52.785750   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"489","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 2507 chars]
	I1024 19:25:53.286778   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:53.286804   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:53.286815   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:53.286825   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:53.290572   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:53.290598   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:53.290609   29716 round_trippers.go:580]     Content-Length: 3531
	I1024 19:25:53.290618   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:53 GMT
	I1024 19:25:53.290627   29716 round_trippers.go:580]     Audit-Id: 25e0115d-3bc2-49b1-8276-eb09dfbcde0e
	I1024 19:25:53.290635   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:53.290643   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:53.290653   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:53.290664   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:53.290733   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"489","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 2507 chars]
	I1024 19:25:53.786277   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:53.786301   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:53.786309   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:53.786315   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:53.788894   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:53.788911   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:53.788922   29716 round_trippers.go:580]     Audit-Id: 351100fe-e3ec-4cd8-a163-df9f17430607
	I1024 19:25:53.788930   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:53.788941   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:53.788947   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:53.788955   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:53.788963   29716 round_trippers.go:580]     Content-Length: 3531
	I1024 19:25:53.788968   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:53 GMT
	I1024 19:25:53.789022   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"489","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 2507 chars]
	I1024 19:25:54.287097   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:54.287129   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:54.287138   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:54.287144   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:54.432646   29716 round_trippers.go:574] Response Status: 200 OK in 145 milliseconds
	I1024 19:25:54.432679   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:54.432691   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:54.432699   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:54.432708   29716 round_trippers.go:580]     Content-Length: 3640
	I1024 19:25:54.432716   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:54 GMT
	I1024 19:25:54.432725   29716 round_trippers.go:580]     Audit-Id: ceb1ba03-d8d7-4fec-bbb9-9e1fea096501
	I1024 19:25:54.432733   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:54.432741   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:54.432857   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"496","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1024 19:25:54.786156   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:54.786182   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:54.786195   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:54.786204   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:54.792651   29716 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1024 19:25:54.792677   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:54.792688   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:54 GMT
	I1024 19:25:54.792698   29716 round_trippers.go:580]     Audit-Id: d46d8e22-67bf-4147-b7ac-b493934433d2
	I1024 19:25:54.792708   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:54.792720   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:54.792732   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:54.792748   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:54.792760   29716 round_trippers.go:580]     Content-Length: 3640
	I1024 19:25:54.792860   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"496","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1024 19:25:54.793147   29716 node_ready.go:58] node "multinode-632589-m02" has status "Ready":"False"
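	The repeated node GETs are minikube's own readiness poll for the new worker; an equivalent standalone check is (a sketch, with names taken from this log and the timeout matching the 6m0s wait above):

	  kubectl --context multinode-632589 wait --for=condition=Ready node/multinode-632589-m02 --timeout=6m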
	I1024 19:25:55.286323   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:55.286349   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:55.286363   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:55.286373   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:55.289060   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:55.289084   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:55.289092   29716 round_trippers.go:580]     Audit-Id: acba6cbb-359d-4405-a38a-49686f73f890
	I1024 19:25:55.289098   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:55.289104   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:55.289113   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:55.289121   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:55.289130   29716 round_trippers.go:580]     Content-Length: 3640
	I1024 19:25:55.289139   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:55 GMT
	I1024 19:25:55.289272   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"496","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1024 19:25:55.787142   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:55.787163   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:55.787171   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:55.787178   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:55.789905   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:55.789926   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:55.789936   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:55.789942   29716 round_trippers.go:580]     Content-Length: 3640
	I1024 19:25:55.789947   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:55 GMT
	I1024 19:25:55.789957   29716 round_trippers.go:580]     Audit-Id: 61794605-486d-4aa4-9af8-fd55edd64a43
	I1024 19:25:55.789965   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:55.789974   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:55.789985   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:55.790113   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"496","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1024 19:25:56.286416   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:56.286437   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:56.286448   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:56.286456   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:56.289389   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:56.289416   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:56.289426   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:56.289435   29716 round_trippers.go:580]     Content-Length: 3640
	I1024 19:25:56.289442   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:56 GMT
	I1024 19:25:56.289458   29716 round_trippers.go:580]     Audit-Id: 38e6305c-d36b-4ad0-97ad-8fa4d48d60b0
	I1024 19:25:56.289471   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:56.289481   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:56.289493   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:56.289567   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"496","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1024 19:25:56.786375   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:56.786396   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:56.786408   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:56.786416   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:56.790817   29716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:25:56.790836   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:56.790844   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:56.790852   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:56.790861   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:56.790875   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:56.790885   29716 round_trippers.go:580]     Content-Length: 3640
	I1024 19:25:56.790903   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:56 GMT
	I1024 19:25:56.790911   29716 round_trippers.go:580]     Audit-Id: bba20657-8810-4994-b1ed-00c01b60c7a5
	I1024 19:25:56.790969   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"496","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1024 19:25:57.286541   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:57.286562   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:57.286573   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:57.286583   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:57.289990   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:57.290012   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:57.290019   29716 round_trippers.go:580]     Content-Length: 3640
	I1024 19:25:57.290025   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:57 GMT
	I1024 19:25:57.290030   29716 round_trippers.go:580]     Audit-Id: d86a6101-3881-43be-9111-1639d1e5a49a
	I1024 19:25:57.290035   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:57.290044   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:57.290052   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:57.290063   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:57.290140   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"496","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1024 19:25:57.290371   29716 node_ready.go:58] node "multinode-632589-m02" has status "Ready":"False"
	I1024 19:25:57.786633   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:57.786658   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:57.786684   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:57.786694   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:57.790164   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:57.790186   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:57.790195   29716 round_trippers.go:580]     Audit-Id: 823c7aea-6914-43ec-8162-a8b4960b350d
	I1024 19:25:57.790203   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:57.790211   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:57.790220   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:57.790234   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:57.790246   29716 round_trippers.go:580]     Content-Length: 3640
	I1024 19:25:57.790258   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:57 GMT
	I1024 19:25:57.790506   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"496","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1024 19:25:58.287175   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:58.287197   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:58.287206   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:58.287216   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:58.291059   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:58.291084   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:58.291094   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:58.291103   29716 round_trippers.go:580]     Content-Length: 3640
	I1024 19:25:58.291111   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:58 GMT
	I1024 19:25:58.291120   29716 round_trippers.go:580]     Audit-Id: 9ec2ba07-6d55-41cf-9534-d8302c0867d8
	I1024 19:25:58.291128   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:58.291135   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:58.291146   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:58.291372   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"496","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1024 19:25:58.786628   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:58.786655   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:58.786668   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:58.786678   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:58.789730   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:58.789753   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:58.789766   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:58.789776   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:58.789784   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:58.789790   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:58.789797   29716 round_trippers.go:580]     Content-Length: 3640
	I1024 19:25:58.789810   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:58 GMT
	I1024 19:25:58.789818   29716 round_trippers.go:580]     Audit-Id: f9fb1838-5513-4a67-bb6a-7fbb2e6e61bc
	I1024 19:25:58.789927   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"496","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1024 19:25:59.287083   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:59.287106   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:59.287114   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:59.287120   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:59.292285   29716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1024 19:25:59.292318   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:59.292329   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:59.292339   29716 round_trippers.go:580]     Content-Length: 3640
	I1024 19:25:59.292347   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:59 GMT
	I1024 19:25:59.292355   29716 round_trippers.go:580]     Audit-Id: 66069f18-dacb-45d0-8edf-6e867a9cb67c
	I1024 19:25:59.292364   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:59.292373   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:59.292385   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:59.292456   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"496","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1024 19:25:59.292746   29716 node_ready.go:58] node "multinode-632589-m02" has status "Ready":"False"
	I1024 19:25:59.787130   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:25:59.787154   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:59.787166   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:59.787176   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:59.790694   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:59.790711   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:59.790718   29716 round_trippers.go:580]     Audit-Id: 40926c7a-b8b6-4f3e-a9aa-b6b6baaba9e8
	I1024 19:25:59.790724   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:59.790729   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:59.790734   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:59.790739   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:59.790744   29716 round_trippers.go:580]     Content-Length: 3726
	I1024 19:25:59.790749   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:59 GMT
	I1024 19:25:59.791020   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"515","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2702 chars]
	I1024 19:25:59.791295   29716 node_ready.go:49] node "multinode-632589-m02" has status "Ready":"True"
	I1024 19:25:59.791312   29716 node_ready.go:38] duration metric: took 7.011914438s waiting for node "multinode-632589-m02" to be "Ready" ...
	I1024 19:25:59.791325   29716 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:25:59.791390   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:25:59.791395   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:59.791406   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:59.791416   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:59.795843   29716 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:25:59.795863   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:59.795872   29716 round_trippers.go:580]     Audit-Id: 880b7bb2-89a2-4799-9524-55c362ca4fb7
	I1024 19:25:59.795886   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:59.795895   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:59.795903   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:59.795911   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:59.795922   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:59 GMT
	I1024 19:25:59.796956   29716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"515"},"items":[{"metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"436","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67370 chars]
	I1024 19:25:59.799004   29716 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:59.799076   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:25:59.799086   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:59.799093   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:59.799099   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:59.801666   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:59.801684   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:59.801694   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:59.801703   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:59.801710   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:59.801718   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:59.801726   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:59 GMT
	I1024 19:25:59.801739   29716 round_trippers.go:580]     Audit-Id: 73e9699c-a9ed-42c4-882f-7a285844a31d
	I1024 19:25:59.801966   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"436","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1024 19:25:59.802354   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:59.802368   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:59.802375   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:59.802382   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:59.805978   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:59.805995   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:59.806005   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:59.806013   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:59.806024   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:59.806032   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:59.806040   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:59 GMT
	I1024 19:25:59.806052   29716 round_trippers.go:580]     Audit-Id: 1a444f58-1d0f-488c-bd39-378015794449
	I1024 19:25:59.806195   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:59.806440   29716 pod_ready.go:92] pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:59.806461   29716 pod_ready.go:81] duration metric: took 7.427578ms waiting for pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:59.806469   29716 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:59.806512   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-632589
	I1024 19:25:59.806518   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:59.806525   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:59.806534   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:59.808613   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:59.808625   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:59.808630   29716 round_trippers.go:580]     Audit-Id: 02f47ef8-ee62-45de-af61-9875ae7de930
	I1024 19:25:59.808635   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:59.808641   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:59.808645   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:59.808653   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:59.808662   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:59 GMT
	I1024 19:25:59.808807   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-632589","namespace":"kube-system","uid":"a84a9833-e3b8-4148-9ee7-3f4479a10186","resourceVersion":"290","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.247:2379","kubernetes.io/config.hash":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.mirror":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.seen":"2023-10-24T19:24:56.213299221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1024 19:25:59.809099   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:59.809109   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:59.809115   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:59.809121   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:59.812286   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:25:59.812302   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:59.812309   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:59.812314   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:59.812319   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:59.812324   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:59.812329   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:59 GMT
	I1024 19:25:59.812334   29716 round_trippers.go:580]     Audit-Id: 3fbadbe8-851d-4b22-95f0-64e41fa91c90
	I1024 19:25:59.812594   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:59.812827   29716 pod_ready.go:92] pod "etcd-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:59.812837   29716 pod_ready.go:81] duration metric: took 6.360252ms waiting for pod "etcd-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:59.812849   29716 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:59.812885   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-632589
	I1024 19:25:59.812892   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:59.812899   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:59.812904   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:59.814565   29716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:25:59.814579   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:59.814585   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:59.814590   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:59 GMT
	I1024 19:25:59.814595   29716 round_trippers.go:580]     Audit-Id: 7cf66e4e-b75b-4d09-8962-116b25338dfe
	I1024 19:25:59.814600   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:59.814607   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:59.814615   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:59.814765   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-632589","namespace":"kube-system","uid":"34fcbf72-bf92-477f-8c1c-b0fd908c561d","resourceVersion":"292","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.247:8443","kubernetes.io/config.hash":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.mirror":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.seen":"2023-10-24T19:24:56.213304140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1024 19:25:59.815069   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:59.815079   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:59.815085   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:59.815091   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:59.816884   29716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:25:59.816925   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:59.816935   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:59 GMT
	I1024 19:25:59.816945   29716 round_trippers.go:580]     Audit-Id: a4c16105-0b39-44c3-9251-a7c3d2628387
	I1024 19:25:59.816953   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:59.816964   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:59.816973   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:59.816988   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:59.817130   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:59.817494   29716 pod_ready.go:92] pod "kube-apiserver-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:59.817512   29716 pod_ready.go:81] duration metric: took 4.656614ms waiting for pod "kube-apiserver-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:59.817523   29716 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:59.817577   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-632589
	I1024 19:25:59.817588   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:59.817599   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:59.817611   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:59.819456   29716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:25:59.819469   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:59.819478   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:59.819484   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:59.819489   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:59.819494   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:59.819499   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:59 GMT
	I1024 19:25:59.819504   29716 round_trippers.go:580]     Audit-Id: 19cf383c-4b7f-4774-b28d-b8892781c4eb
	I1024 19:25:59.819780   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-632589","namespace":"kube-system","uid":"6eb03208-9b7f-4b5d-a7cf-03dd9c7948e6","resourceVersion":"297","creationTimestamp":"2023-10-24T19:24:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9a4a5ca64f08e8d78cd58402e3f15810","kubernetes.io/config.mirror":"9a4a5ca64f08e8d78cd58402e3f15810","kubernetes.io/config.seen":"2023-10-24T19:24:47.530352200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1024 19:25:59.820127   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:25:59.820140   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:59.820146   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:59.820152   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:59.822408   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:59.822426   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:59.822435   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:59.822444   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:59.822452   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:59.822460   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:59 GMT
	I1024 19:25:59.822466   29716 round_trippers.go:580]     Audit-Id: cd66af15-f149-4f1c-9871-31e0cc8b85a1
	I1024 19:25:59.822474   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:59.822730   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:25:59.822972   29716 pod_ready.go:92] pod "kube-controller-manager-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:59.822984   29716 pod_ready.go:81] duration metric: took 5.454886ms waiting for pod "kube-controller-manager-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:59.822993   29716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6vn7s" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:59.987300   29716 request.go:629] Waited for 164.26089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:25:59.987351   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:25:59.987368   29716 round_trippers.go:469] Request Headers:
	I1024 19:25:59.987375   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:25:59.987381   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:25:59.990068   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:25:59.990183   29716 round_trippers.go:577] Response Headers:
	I1024 19:25:59.990197   29716 round_trippers.go:580]     Audit-Id: a456db62-6c21-484f-8b61-517b2f0a27dc
	I1024 19:25:59.990207   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:25:59.990223   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:25:59.990234   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:25:59.990244   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:25:59.990258   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:25:59 GMT
	I1024 19:25:59.990398   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6vn7s","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6b9189d-1bbe-4de8-a0d8-4ea43b55a45b","resourceVersion":"505","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1024 19:26:00.188175   29716 request.go:629] Waited for 197.364108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:26:00.188239   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:26:00.188244   29716 round_trippers.go:469] Request Headers:
	I1024 19:26:00.188251   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:26:00.188257   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:26:00.190875   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:26:00.190897   29716 round_trippers.go:577] Response Headers:
	I1024 19:26:00.190905   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:26:00.190910   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:26:00.190915   29716 round_trippers.go:580]     Content-Length: 3726
	I1024 19:26:00.190926   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:26:00 GMT
	I1024 19:26:00.190937   29716 round_trippers.go:580]     Audit-Id: e46f6266-b881-4ff6-a55d-d07ee90b71f0
	I1024 19:26:00.190945   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:26:00.190953   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:26:00.191031   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"515","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2702 chars]
	I1024 19:26:00.191423   29716 pod_ready.go:92] pod "kube-proxy-6vn7s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:26:00.191450   29716 pod_ready.go:81] duration metric: took 368.451015ms waiting for pod "kube-proxy-6vn7s" in "kube-system" namespace to be "Ready" ...
	I1024 19:26:00.191464   29716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gd49s" in "kube-system" namespace to be "Ready" ...
	I1024 19:26:00.387632   29716 request.go:629] Waited for 196.107865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd49s
	I1024 19:26:00.387699   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd49s
	I1024 19:26:00.387704   29716 round_trippers.go:469] Request Headers:
	I1024 19:26:00.387711   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:26:00.387717   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:26:00.390382   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:26:00.390400   29716 round_trippers.go:577] Response Headers:
	I1024 19:26:00.390407   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:26:00.390412   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:26:00.390418   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:26:00.390422   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:26:00 GMT
	I1024 19:26:00.390428   29716 round_trippers.go:580]     Audit-Id: 768c3674-d639-45a2-b988-d9c920991900
	I1024 19:26:00.390433   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:26:00.390659   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gd49s","generateName":"kube-proxy-","namespace":"kube-system","uid":"a1c573fd-3f4b-4d90-a366-6d859a121185","resourceVersion":"408","creationTimestamp":"2023-10-24T19:25:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1024 19:26:00.587405   29716 request.go:629] Waited for 196.338127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:26:00.587463   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:26:00.587470   29716 round_trippers.go:469] Request Headers:
	I1024 19:26:00.587479   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:26:00.587487   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:26:00.590362   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:26:00.590378   29716 round_trippers.go:577] Response Headers:
	I1024 19:26:00.590384   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:26:00.590389   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:26:00 GMT
	I1024 19:26:00.590396   29716 round_trippers.go:580]     Audit-Id: 6fe7acf7-be28-4ede-8b0a-0a7acab86f8a
	I1024 19:26:00.590401   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:26:00.590407   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:26:00.590412   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:26:00.590615   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:26:00.590896   29716 pod_ready.go:92] pod "kube-proxy-gd49s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:26:00.590909   29716 pod_ready.go:81] duration metric: took 399.437251ms waiting for pod "kube-proxy-gd49s" in "kube-system" namespace to be "Ready" ...
	I1024 19:26:00.590918   29716 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:26:00.787318   29716 request.go:629] Waited for 196.318711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-632589
	I1024 19:26:00.787377   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-632589
	I1024 19:26:00.787382   29716 round_trippers.go:469] Request Headers:
	I1024 19:26:00.787389   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:26:00.787395   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:26:00.790729   29716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:26:00.790748   29716 round_trippers.go:577] Response Headers:
	I1024 19:26:00.790754   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:26:00 GMT
	I1024 19:26:00.790760   29716 round_trippers.go:580]     Audit-Id: 80a83328-b04f-4c18-b545-4117ac4c1637
	I1024 19:26:00.790768   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:26:00.790776   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:26:00.790784   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:26:00.790794   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:26:00.790991   29716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-632589","namespace":"kube-system","uid":"e85a7c19-1a25-42f5-81bd-16ed7070ca3c","resourceVersion":"294","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"83154ed970e6208e036ff8de26a58e6d","kubernetes.io/config.mirror":"83154ed970e6208e036ff8de26a58e6d","kubernetes.io/config.seen":"2023-10-24T19:24:56.213306721Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1024 19:26:00.987696   29716 request.go:629] Waited for 196.361299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:26:00.987770   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:26:00.987776   29716 round_trippers.go:469] Request Headers:
	I1024 19:26:00.987784   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:26:00.987791   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:26:00.990553   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:26:00.990577   29716 round_trippers.go:577] Response Headers:
	I1024 19:26:00.990585   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:26:00.990593   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:26:00.990605   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:26:00.990627   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:26:00 GMT
	I1024 19:26:00.990640   29716 round_trippers.go:580]     Audit-Id: 3976e4fa-eb26-4005-a83c-1ecfccffae46
	I1024 19:26:00.990650   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:26:00.990771   29716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1024 19:26:00.991095   29716 pod_ready.go:92] pod "kube-scheduler-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:26:00.991110   29716 pod_ready.go:81] duration metric: took 400.186803ms waiting for pod "kube-scheduler-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:26:00.991122   29716 pod_ready.go:38] duration metric: took 1.199784894s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:26:00.991136   29716 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:26:00.991188   29716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:26:01.006070   29716 system_svc.go:56] duration metric: took 14.917939ms WaitForService to wait for kubelet.
	I1024 19:26:01.006095   29716 kubeadm.go:581] duration metric: took 8.252086931s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:26:01.006114   29716 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:26:01.187513   29716 request.go:629] Waited for 181.324635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes
	I1024 19:26:01.187560   29716 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes
	I1024 19:26:01.187565   29716 round_trippers.go:469] Request Headers:
	I1024 19:26:01.187573   29716 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:26:01.187579   29716 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:26:01.190320   29716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:26:01.190339   29716 round_trippers.go:577] Response Headers:
	I1024 19:26:01.190345   29716 round_trippers.go:580]     Content-Type: application/json
	I1024 19:26:01.190351   29716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:26:01.190357   29716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:26:01.190362   29716 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:26:01 GMT
	I1024 19:26:01.190367   29716 round_trippers.go:580]     Audit-Id: 774e27f3-0c04-41d9-be83-9a081b63e08f
	I1024 19:26:01.190372   29716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:26:01.190560   29716 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"516"},"items":[{"metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"420","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9646 chars]
	I1024 19:26:01.190948   29716 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:26:01.190963   29716 node_conditions.go:123] node cpu capacity is 2
	I1024 19:26:01.190972   29716 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:26:01.190976   29716 node_conditions.go:123] node cpu capacity is 2
	I1024 19:26:01.190980   29716 node_conditions.go:105] duration metric: took 184.861707ms to run NodePressure ...
	I1024 19:26:01.191005   29716 start.go:228] waiting for startup goroutines ...
	I1024 19:26:01.191026   29716 start.go:242] writing updated cluster config ...
	I1024 19:26:01.191294   29716 ssh_runner.go:195] Run: rm -f paused
	I1024 19:26:01.236843   29716 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:26:01.239662   29716 out.go:177] * Done! kubectl is now configured to use "multinode-632589" cluster and "default" namespace by default
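	[editor's note] The pod_ready.go wait loop above is minikube polling the API server until each system pod reports the Ready condition, backing off on client-side throttling, before printing "Done!". As a rough illustration only (not minikube's actual implementation), a minimal client-go sketch of such a poll might look like the following; the kubeconfig path, the pod name, and the 400ms interval are assumptions taken from or inspired by the log.

	// Hypothetical sketch: poll one pod's Ready condition until a deadline.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig path; minikube writes one per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"kube-scheduler-multinode-632589", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(400 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}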
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 19:24:23 UTC, ends at Tue 2023-10-24 19:26:07 UTC. --
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.828961742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698175567828946518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=36ac609f-d877-40b3-b904-d70d2b6e6296 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.829619195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5f92a568-6362-4a74-a503-918668b46f23 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.829667283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5f92a568-6362-4a74-a503-918668b46f23 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.829898959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:601072940a7880fdfb5fcbddd384fc7dcd73c5a0861d725c6c9d1fc036a33ee9,PodSandboxId:2ca6c566518b6081df0bb7caf0655ca032095cdaae753c7fc2c882725603e9ec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698175563870243927,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ddcjz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b81ca1e-f022-4358-be2c-27042e8503c1,},Annotations:map[string]string{io.kubernetes.container.hash: a90270f3,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fd2b2ba21a5fe632ce90e990c920b9581903ca068b458c20cbdd84cdcde68f,PodSandboxId:a8cfeb977e7a154c1ff0588feb2f115a3d1cb06dd0221b13c8aeda5add01b706,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698175517779415430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c5l8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aa782d-e6ed-45ad-b625-556d1a8503c0,},Annotations:map[string]string{io.kubernetes.container.hash: a119ceb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96453acfe6a36f93a8a8b2f539c808aa48ae923768ee2a237b9007fd92e1374f,PodSandboxId:2c9a63de8b58b003c0da744330d9d96c33aaec0bf51a50b8333c3509326039f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698175517556008960,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 4023756b-6e38-476d-8dec-90ea2346dc01,},Annotations:map[string]string{io.kubernetes.container.hash: cc9a5e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ad495d5fbfcb2e80e1521c1b7eb556b65885ea0fa2616ee4f922bbbb380a48,PodSandboxId:aa40f5338484e55a8b52dba1cc0e8d04d370bb7cedf82626e81cb7968584b596,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698175514753864700,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xh444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7a21bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c4eeb3ad883bdf9490a5f37ef0d81539041e67e4e986bb5750dce8c088ed03,PodSandboxId:21b512ab863a81b159a18e330484f115c323ef0c3c7798dc08297ce60746805c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698175512457235601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd49s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c573fd-3f4b-4d90-a366-6d859a
121185,},Annotations:map[string]string{io.kubernetes.container.hash: bd7cfcae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a583da9294cb4fdec84185ceaceb5e1357e9b7362d7bab82f219690e2c8f2d98,PodSandboxId:5e3710cccd08ffdbf8160f8638163f930902be4039ba749e039c66d66943fb22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698175489335339840,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83154ed970e6208e036ff8de26a58e6d,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a436bf8decce624ff573fb3778d6750ddd441dfd642b30a481b05f0c7a5ac565,PodSandboxId:fd112246a92ecbf23f66a4388aa945aa0515c0cc91c670180368824f6846cde9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698175488914859843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07959cd35b2ca084078d0fd5b7cf919c,},Annotations:map[string]string{io.kubernetes.container.h
ash: c1b3fcc1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48cb7643f21dd7f26baad900a9f112360724b58b3af2af74d81611c399a385e9,PodSandboxId:0f073184917249ca980d2a008e4284c03197bab1a5b853b27360f4e6a4f865d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698175488655535042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3765446b9543fe4146506d2b0cf0aafd,},Annotations:map[string]string{io.kubernetes.container.hash: 2e7911d
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15eccdcf8f718a5f523ad290afa414923d4bc3821b6c435171605b3dae31657,PodSandboxId:2cb5555463cac109706e0731e5ef4c0e985f0742f83e128a133f9189fe1ee90f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698175488609304077,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4a5ca64f08e8d78cd58402e3f15810,},Annotations:map[string]string{io.kubernetes.
container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5f92a568-6362-4a74-a503-918668b46f23 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.869143227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=afe90590-2855-4eef-a4b2-7c203b52de2d name=/runtime.v1.RuntimeService/Version
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.869226240Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=afe90590-2855-4eef-a4b2-7c203b52de2d name=/runtime.v1.RuntimeService/Version
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.870177314Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=402cf99f-2c7c-43db-bead-0a2c74a243d2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.870552183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698175567870537845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=402cf99f-2c7c-43db-bead-0a2c74a243d2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.871167092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e3c88739-c80a-4231-ac1e-061aa0da99bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.871241340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e3c88739-c80a-4231-ac1e-061aa0da99bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.871432258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:601072940a7880fdfb5fcbddd384fc7dcd73c5a0861d725c6c9d1fc036a33ee9,PodSandboxId:2ca6c566518b6081df0bb7caf0655ca032095cdaae753c7fc2c882725603e9ec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698175563870243927,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ddcjz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b81ca1e-f022-4358-be2c-27042e8503c1,},Annotations:map[string]string{io.kubernetes.container.hash: a90270f3,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fd2b2ba21a5fe632ce90e990c920b9581903ca068b458c20cbdd84cdcde68f,PodSandboxId:a8cfeb977e7a154c1ff0588feb2f115a3d1cb06dd0221b13c8aeda5add01b706,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698175517779415430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c5l8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aa782d-e6ed-45ad-b625-556d1a8503c0,},Annotations:map[string]string{io.kubernetes.container.hash: a119ceb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96453acfe6a36f93a8a8b2f539c808aa48ae923768ee2a237b9007fd92e1374f,PodSandboxId:2c9a63de8b58b003c0da744330d9d96c33aaec0bf51a50b8333c3509326039f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698175517556008960,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 4023756b-6e38-476d-8dec-90ea2346dc01,},Annotations:map[string]string{io.kubernetes.container.hash: cc9a5e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ad495d5fbfcb2e80e1521c1b7eb556b65885ea0fa2616ee4f922bbbb380a48,PodSandboxId:aa40f5338484e55a8b52dba1cc0e8d04d370bb7cedf82626e81cb7968584b596,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698175514753864700,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xh444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7a21bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c4eeb3ad883bdf9490a5f37ef0d81539041e67e4e986bb5750dce8c088ed03,PodSandboxId:21b512ab863a81b159a18e330484f115c323ef0c3c7798dc08297ce60746805c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698175512457235601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd49s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c573fd-3f4b-4d90-a366-6d859a
121185,},Annotations:map[string]string{io.kubernetes.container.hash: bd7cfcae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a583da9294cb4fdec84185ceaceb5e1357e9b7362d7bab82f219690e2c8f2d98,PodSandboxId:5e3710cccd08ffdbf8160f8638163f930902be4039ba749e039c66d66943fb22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698175489335339840,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83154ed970e6208e036ff8de26a58e6d,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a436bf8decce624ff573fb3778d6750ddd441dfd642b30a481b05f0c7a5ac565,PodSandboxId:fd112246a92ecbf23f66a4388aa945aa0515c0cc91c670180368824f6846cde9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698175488914859843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07959cd35b2ca084078d0fd5b7cf919c,},Annotations:map[string]string{io.kubernetes.container.h
ash: c1b3fcc1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48cb7643f21dd7f26baad900a9f112360724b58b3af2af74d81611c399a385e9,PodSandboxId:0f073184917249ca980d2a008e4284c03197bab1a5b853b27360f4e6a4f865d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698175488655535042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3765446b9543fe4146506d2b0cf0aafd,},Annotations:map[string]string{io.kubernetes.container.hash: 2e7911d
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15eccdcf8f718a5f523ad290afa414923d4bc3821b6c435171605b3dae31657,PodSandboxId:2cb5555463cac109706e0731e5ef4c0e985f0742f83e128a133f9189fe1ee90f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698175488609304077,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4a5ca64f08e8d78cd58402e3f15810,},Annotations:map[string]string{io.kubernetes.
container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e3c88739-c80a-4231-ac1e-061aa0da99bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.908260276Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=25d94dca-6b6d-4467-a263-86d39e5186d3 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.908352167Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=25d94dca-6b6d-4467-a263-86d39e5186d3 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.909400083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b6d991c2-80c7-4495-b6f4-8927bfa38c8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.909912630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698175567909899821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b6d991c2-80c7-4495-b6f4-8927bfa38c8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.910516743Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=36aabc08-ea8f-4cc0-97d7-9e53cfb25e4d name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.910589923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=36aabc08-ea8f-4cc0-97d7-9e53cfb25e4d name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.910855408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:601072940a7880fdfb5fcbddd384fc7dcd73c5a0861d725c6c9d1fc036a33ee9,PodSandboxId:2ca6c566518b6081df0bb7caf0655ca032095cdaae753c7fc2c882725603e9ec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698175563870243927,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ddcjz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b81ca1e-f022-4358-be2c-27042e8503c1,},Annotations:map[string]string{io.kubernetes.container.hash: a90270f3,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fd2b2ba21a5fe632ce90e990c920b9581903ca068b458c20cbdd84cdcde68f,PodSandboxId:a8cfeb977e7a154c1ff0588feb2f115a3d1cb06dd0221b13c8aeda5add01b706,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698175517779415430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c5l8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aa782d-e6ed-45ad-b625-556d1a8503c0,},Annotations:map[string]string{io.kubernetes.container.hash: a119ceb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96453acfe6a36f93a8a8b2f539c808aa48ae923768ee2a237b9007fd92e1374f,PodSandboxId:2c9a63de8b58b003c0da744330d9d96c33aaec0bf51a50b8333c3509326039f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698175517556008960,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 4023756b-6e38-476d-8dec-90ea2346dc01,},Annotations:map[string]string{io.kubernetes.container.hash: cc9a5e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ad495d5fbfcb2e80e1521c1b7eb556b65885ea0fa2616ee4f922bbbb380a48,PodSandboxId:aa40f5338484e55a8b52dba1cc0e8d04d370bb7cedf82626e81cb7968584b596,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698175514753864700,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xh444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7a21bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c4eeb3ad883bdf9490a5f37ef0d81539041e67e4e986bb5750dce8c088ed03,PodSandboxId:21b512ab863a81b159a18e330484f115c323ef0c3c7798dc08297ce60746805c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698175512457235601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd49s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c573fd-3f4b-4d90-a366-6d859a
121185,},Annotations:map[string]string{io.kubernetes.container.hash: bd7cfcae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a583da9294cb4fdec84185ceaceb5e1357e9b7362d7bab82f219690e2c8f2d98,PodSandboxId:5e3710cccd08ffdbf8160f8638163f930902be4039ba749e039c66d66943fb22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698175489335339840,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83154ed970e6208e036ff8de26a58e6d,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a436bf8decce624ff573fb3778d6750ddd441dfd642b30a481b05f0c7a5ac565,PodSandboxId:fd112246a92ecbf23f66a4388aa945aa0515c0cc91c670180368824f6846cde9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698175488914859843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07959cd35b2ca084078d0fd5b7cf919c,},Annotations:map[string]string{io.kubernetes.container.h
ash: c1b3fcc1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48cb7643f21dd7f26baad900a9f112360724b58b3af2af74d81611c399a385e9,PodSandboxId:0f073184917249ca980d2a008e4284c03197bab1a5b853b27360f4e6a4f865d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698175488655535042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3765446b9543fe4146506d2b0cf0aafd,},Annotations:map[string]string{io.kubernetes.container.hash: 2e7911d
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15eccdcf8f718a5f523ad290afa414923d4bc3821b6c435171605b3dae31657,PodSandboxId:2cb5555463cac109706e0731e5ef4c0e985f0742f83e128a133f9189fe1ee90f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698175488609304077,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4a5ca64f08e8d78cd58402e3f15810,},Annotations:map[string]string{io.kubernetes.
container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=36aabc08-ea8f-4cc0-97d7-9e53cfb25e4d name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.955982367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=231e9266-8e59-4d99-a1a3-580a95d84d75 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.956126458Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=231e9266-8e59-4d99-a1a3-580a95d84d75 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.957473263Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4e48c496-f097-4d28-93d7-27bf6da3aa6b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.957927209Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698175567957913492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4e48c496-f097-4d28-93d7-27bf6da3aa6b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.958514069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0e4da27c-08b6-4118-b833-111a4694d470 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.958585331Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0e4da27c-08b6-4118-b833-111a4694d470 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:26:07 multinode-632589 crio[703]: time="2023-10-24 19:26:07.958870113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:601072940a7880fdfb5fcbddd384fc7dcd73c5a0861d725c6c9d1fc036a33ee9,PodSandboxId:2ca6c566518b6081df0bb7caf0655ca032095cdaae753c7fc2c882725603e9ec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698175563870243927,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ddcjz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b81ca1e-f022-4358-be2c-27042e8503c1,},Annotations:map[string]string{io.kubernetes.container.hash: a90270f3,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fd2b2ba21a5fe632ce90e990c920b9581903ca068b458c20cbdd84cdcde68f,PodSandboxId:a8cfeb977e7a154c1ff0588feb2f115a3d1cb06dd0221b13c8aeda5add01b706,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698175517779415430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c5l8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aa782d-e6ed-45ad-b625-556d1a8503c0,},Annotations:map[string]string{io.kubernetes.container.hash: a119ceb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96453acfe6a36f93a8a8b2f539c808aa48ae923768ee2a237b9007fd92e1374f,PodSandboxId:2c9a63de8b58b003c0da744330d9d96c33aaec0bf51a50b8333c3509326039f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698175517556008960,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 4023756b-6e38-476d-8dec-90ea2346dc01,},Annotations:map[string]string{io.kubernetes.container.hash: cc9a5e5b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ad495d5fbfcb2e80e1521c1b7eb556b65885ea0fa2616ee4f922bbbb380a48,PodSandboxId:aa40f5338484e55a8b52dba1cc0e8d04d370bb7cedf82626e81cb7968584b596,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698175514753864700,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xh444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7a21bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c4eeb3ad883bdf9490a5f37ef0d81539041e67e4e986bb5750dce8c088ed03,PodSandboxId:21b512ab863a81b159a18e330484f115c323ef0c3c7798dc08297ce60746805c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698175512457235601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd49s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c573fd-3f4b-4d90-a366-6d859a
121185,},Annotations:map[string]string{io.kubernetes.container.hash: bd7cfcae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a583da9294cb4fdec84185ceaceb5e1357e9b7362d7bab82f219690e2c8f2d98,PodSandboxId:5e3710cccd08ffdbf8160f8638163f930902be4039ba749e039c66d66943fb22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698175489335339840,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83154ed970e6208e036ff8de26a58e6d,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a436bf8decce624ff573fb3778d6750ddd441dfd642b30a481b05f0c7a5ac565,PodSandboxId:fd112246a92ecbf23f66a4388aa945aa0515c0cc91c670180368824f6846cde9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698175488914859843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07959cd35b2ca084078d0fd5b7cf919c,},Annotations:map[string]string{io.kubernetes.container.h
ash: c1b3fcc1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48cb7643f21dd7f26baad900a9f112360724b58b3af2af74d81611c399a385e9,PodSandboxId:0f073184917249ca980d2a008e4284c03197bab1a5b853b27360f4e6a4f865d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698175488655535042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3765446b9543fe4146506d2b0cf0aafd,},Annotations:map[string]string{io.kubernetes.container.hash: 2e7911d
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15eccdcf8f718a5f523ad290afa414923d4bc3821b6c435171605b3dae31657,PodSandboxId:2cb5555463cac109706e0731e5ef4c0e985f0742f83e128a133f9189fe1ee90f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698175488609304077,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4a5ca64f08e8d78cd58402e3f15810,},Annotations:map[string]string{io.kubernetes.
container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0e4da27c-08b6-4118-b833-111a4694d470 name=/runtime.v1.RuntimeService/ListContainers
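	[editor's note] The CRI-O journal above is the kubelet's routine polling of the CRI runtime service (Version, ImageFsInfo, ListContainers) over the unix socket named in the node's cri-socket annotation. Purely as an illustrative sketch, and assuming the k8s.io/cri-api and google.golang.org/grpc modules are available (this is not code from the report), the same two calls could be issued like this:

	// Hypothetical sketch: query CRI-O's Version and full container list.
	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Socket path matches the cri-socket annotation seen earlier in the log.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		ver, err := rt.Version(context.TODO(), &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

		// An empty filter returns the full container list, as the log notes.
		list, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}

	Interactively, crictl ps pointed at the same socket would show the equivalent list.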
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	601072940a788       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   2ca6c566518b6       busybox-5bc68d56bd-ddcjz
	60fd2b2ba21a5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      50 seconds ago       Running             coredns                   0                   a8cfeb977e7a1       coredns-5dd5756b68-c5l8s
	96453acfe6a36       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      50 seconds ago       Running             storage-provisioner       0                   2c9a63de8b58b       storage-provisioner
	63ad495d5fbfc       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      53 seconds ago       Running             kindnet-cni               0                   aa40f5338484e       kindnet-xh444
	07c4eeb3ad883       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      55 seconds ago       Running             kube-proxy                0                   21b512ab863a8       kube-proxy-gd49s
	a583da9294cb4       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      About a minute ago   Running             kube-scheduler            0                   5e3710cccd08f       kube-scheduler-multinode-632589
	a436bf8decce6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   fd112246a92ec       etcd-multinode-632589
	48cb7643f21dd       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      About a minute ago   Running             kube-apiserver            0                   0f07318491724       kube-apiserver-multinode-632589
	b15eccdcf8f71       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      About a minute ago   Running             kube-controller-manager   0                   2cb5555463cac       kube-controller-manager-multinode-632589
	
	* 
	* ==> coredns [60fd2b2ba21a5fe632ce90e990c920b9581903ca068b458c20cbdd84cdcde68f] <==
	* [INFO] 10.244.0.3:34353 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006583s
	[INFO] 10.244.1.2:40509 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013986s
	[INFO] 10.244.1.2:33726 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001832193s
	[INFO] 10.244.1.2:57941 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108662s
	[INFO] 10.244.1.2:43449 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080862s
	[INFO] 10.244.1.2:34062 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001256574s
	[INFO] 10.244.1.2:48120 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127659s
	[INFO] 10.244.1.2:58344 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077822s
	[INFO] 10.244.1.2:53810 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088653s
	[INFO] 10.244.0.3:57142 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117935s
	[INFO] 10.244.0.3:32860 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083417s
	[INFO] 10.244.0.3:55193 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114227s
	[INFO] 10.244.0.3:43930 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000228191s
	[INFO] 10.244.1.2:43361 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148657s
	[INFO] 10.244.1.2:37677 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134939s
	[INFO] 10.244.1.2:52223 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104902s
	[INFO] 10.244.1.2:57731 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000128908s
	[INFO] 10.244.0.3:35877 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162728s
	[INFO] 10.244.0.3:51137 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129678s
	[INFO] 10.244.0.3:58427 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117476s
	[INFO] 10.244.0.3:58900 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000171358s
	[INFO] 10.244.1.2:42262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198832s
	[INFO] 10.244.1.2:41996 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000198956s
	[INFO] 10.244.1.2:45227 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104078s
	[INFO] 10.244.1.2:34338 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088942s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-632589
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-632589
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=multinode-632589
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_24_57_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:24:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-632589
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:26:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:25:16 +0000   Tue, 24 Oct 2023 19:24:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:25:16 +0000   Tue, 24 Oct 2023 19:24:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:25:16 +0000   Tue, 24 Oct 2023 19:24:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:25:16 +0000   Tue, 24 Oct 2023 19:25:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    multinode-632589
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7a2a529e06345baafa6e4c8e4cddc27
	  System UUID:                c7a2a529-e063-45ba-afa6-e4c8e4cddc27
	  Boot ID:                    b6c4bc05-138e-487b-9708-70704f498ac5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-ddcjz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-c5l8s                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     59s
	  kube-system                 etcd-multinode-632589                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	  kube-system                 kindnet-xh444                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      58s
	  kube-system                 kube-apiserver-multinode-632589             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-multinode-632589    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-gd49s                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-multinode-632589             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  81s (x8 over 81s)  kubelet          Node multinode-632589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s (x8 over 81s)  kubelet          Node multinode-632589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s (x7 over 81s)  kubelet          Node multinode-632589 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 72s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s                kubelet          Node multinode-632589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s                kubelet          Node multinode-632589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s                kubelet          Node multinode-632589 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           59s                node-controller  Node multinode-632589 event: Registered Node multinode-632589 in Controller
	  Normal  NodeReady                52s                kubelet          Node multinode-632589 status is now: NodeReady
	
	
	Name:               multinode-632589-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-632589-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:25:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-632589-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:26:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:25:59 +0000   Tue, 24 Oct 2023 19:25:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:25:59 +0000   Tue, 24 Oct 2023 19:25:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:25:59 +0000   Tue, 24 Oct 2023 19:25:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:25:59 +0000   Tue, 24 Oct 2023 19:25:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    multinode-632589-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 181b01b743a5434eb054a2d660635f48
	  System UUID:                181b01b7-43a5-434e-b054-a2d660635f48
	  Boot ID:                    6dad4c62-d4a1-4d68-aeb0-c7048487d91a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wrmmm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-qvkwv               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17s
	  kube-system                 kube-proxy-6vn7s            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  NodeHasSufficientMemory  17s (x5 over 18s)  kubelet          Node multinode-632589-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x5 over 18s)  kubelet          Node multinode-632589-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x5 over 18s)  kubelet          Node multinode-632589-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                node-controller  Node multinode-632589-m02 event: Registered Node multinode-632589-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-632589-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Oct24 19:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067255] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.323888] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.344491] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151920] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.943587] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.928670] systemd-fstab-generator[629]: Ignoring "noauto" for root device
	[  +0.106014] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.140377] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.103068] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.201588] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +9.219621] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[  +8.738003] systemd-fstab-generator[1242]: Ignoring "noauto" for root device
	[Oct24 19:25] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [a436bf8decce624ff573fb3778d6750ddd441dfd642b30a481b05f0c7a5ac565] <==
	* {"level":"info","ts":"2023-10-24T19:24:51.631025Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:24:51.631433Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7fda2fc0436a8884","local-member-id":"b60ca5935c0b4769","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:24:51.631537Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:24:51.631584Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:24:51.631824Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T19:24:51.631861Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T19:24:51.631871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:24:51.632673Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T19:24:51.634033Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.247:2379"}
	{"level":"info","ts":"2023-10-24T19:25:10.708762Z","caller":"traceutil/trace.go:171","msg":"trace[1821044862] linearizableReadLoop","detail":"{readStateIndex:405; appliedIndex:404; }","duration":"130.276864ms","start":"2023-10-24T19:25:10.578472Z","end":"2023-10-24T19:25:10.708749Z","steps":["trace[1821044862] 'read index received'  (duration: 129.736601ms)","trace[1821044862] 'applied index is now lower than readState.Index'  (duration: 539.535µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:25:10.709056Z","caller":"traceutil/trace.go:171","msg":"trace[511334760] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"145.961566ms","start":"2023-10-24T19:25:10.563084Z","end":"2023-10-24T19:25:10.709046Z","steps":["trace[511334760] 'process raft request'  (duration: 145.360857ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:25:10.709226Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.76432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2023-10-24T19:25:10.709276Z","caller":"traceutil/trace.go:171","msg":"trace[538649669] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:391; }","duration":"130.828359ms","start":"2023-10-24T19:25:10.578442Z","end":"2023-10-24T19:25:10.70927Z","steps":["trace[538649669] 'agreement among raft nodes before linearized reading'  (duration: 130.74405ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:25:10.709417Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.000604ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-632589\" ","response":"range_response_count:1 size:5617"}
	{"level":"info","ts":"2023-10-24T19:25:10.709455Z","caller":"traceutil/trace.go:171","msg":"trace[803364036] range","detail":"{range_begin:/registry/minions/multinode-632589; range_end:; response_count:1; response_revision:391; }","duration":"112.03927ms","start":"2023-10-24T19:25:10.59741Z","end":"2023-10-24T19:25:10.709449Z","steps":["trace[803364036] 'agreement among raft nodes before linearized reading'  (duration: 111.982181ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:25:10.698607Z","caller":"traceutil/trace.go:171","msg":"trace[1279542468] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"143.965779ms","start":"2023-10-24T19:25:10.554557Z","end":"2023-10-24T19:25:10.698523Z","steps":["trace[1279542468] 'process raft request'  (duration: 143.836242ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:25:54.377033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.60256ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5145797307196418795 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-shalyhpxrsyje7h43ueaqave5m\" mod_revision:460 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-shalyhpxrsyje7h43ueaqave5m\" value_size:607 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-shalyhpxrsyje7h43ueaqave5m\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-24T19:25:54.377273Z","caller":"traceutil/trace.go:171","msg":"trace[570800925] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:518; }","duration":"110.852137ms","start":"2023-10-24T19:25:54.266402Z","end":"2023-10-24T19:25:54.377254Z","steps":["trace[570800925] 'read index received'  (duration: 5.91378ms)","trace[570800925] 'applied index is now lower than readState.Index'  (duration: 104.937597ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:25:54.377357Z","caller":"traceutil/trace.go:171","msg":"trace[1416189756] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"270.313604ms","start":"2023-10-24T19:25:54.107037Z","end":"2023-10-24T19:25:54.377351Z","steps":["trace[1416189756] 'process raft request'  (duration: 165.272339ms)","trace[1416189756] 'compare'  (duration: 103.380188ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:25:54.377545Z","caller":"traceutil/trace.go:171","msg":"trace[758073014] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"152.534794ms","start":"2023-10-24T19:25:54.225003Z","end":"2023-10-24T19:25:54.377537Z","steps":["trace[758073014] 'process raft request'  (duration: 152.20823ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:25:54.377807Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.44545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T19:25:54.377868Z","caller":"traceutil/trace.go:171","msg":"trace[1366788318] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; response_count:0; response_revision:495; }","duration":"111.508178ms","start":"2023-10-24T19:25:54.266348Z","end":"2023-10-24T19:25:54.377856Z","steps":["trace[1366788318] 'agreement among raft nodes before linearized reading'  (duration: 111.311515ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:25:54.42546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.345336ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-632589-m02\" ","response":"range_response_count:1 size:2530"}
	{"level":"info","ts":"2023-10-24T19:25:54.425531Z","caller":"traceutil/trace.go:171","msg":"trace[608052184] range","detail":"{range_begin:/registry/minions/multinode-632589-m02; range_end:; response_count:1; response_revision:496; }","duration":"140.430808ms","start":"2023-10-24T19:25:54.285092Z","end":"2023-10-24T19:25:54.425523Z","steps":["trace[608052184] 'agreement among raft nodes before linearized reading'  (duration: 140.30312ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:25:54.425499Z","caller":"traceutil/trace.go:171","msg":"trace[1976212304] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"107.855758ms","start":"2023-10-24T19:25:54.317626Z","end":"2023-10-24T19:25:54.425482Z","steps":["trace[1976212304] 'process raft request'  (duration: 107.66705ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:26:08 up 1 min,  0 users,  load average: 0.77, 0.37, 0.14
	Linux multinode-632589 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [63ad495d5fbfcb2e80e1521c1b7eb556b65885ea0fa2616ee4f922bbbb380a48] <==
	* I1024 19:25:15.606924       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1024 19:25:15.607089       1 main.go:107] hostIP = 192.168.39.247
	podIP = 192.168.39.247
	I1024 19:25:15.607345       1 main.go:116] setting mtu 1500 for CNI 
	I1024 19:25:15.607392       1 main.go:146] kindnetd IP family: "ipv4"
	I1024 19:25:15.607423       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1024 19:25:16.201191       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I1024 19:25:16.201307       1 main.go:227] handling current node
	I1024 19:25:26.208684       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I1024 19:25:26.209071       1 main.go:227] handling current node
	I1024 19:25:36.223070       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I1024 19:25:36.223170       1 main.go:227] handling current node
	I1024 19:25:46.243392       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I1024 19:25:46.243512       1 main.go:227] handling current node
	I1024 19:25:56.258398       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I1024 19:25:56.258454       1 main.go:227] handling current node
	I1024 19:25:56.258483       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I1024 19:25:56.258489       1 main.go:250] Node multinode-632589-m02 has CIDR [10.244.1.0/24] 
	I1024 19:25:56.258752       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.186 Flags: [] Table: 0} 
	I1024 19:26:06.266194       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I1024 19:26:06.266246       1 main.go:227] handling current node
	I1024 19:26:06.266258       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I1024 19:26:06.266267       1 main.go:250] Node multinode-632589-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [48cb7643f21dd7f26baad900a9f112360724b58b3af2af74d81611c399a385e9] <==
	* I1024 19:24:53.049224       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 19:24:53.050215       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1024 19:24:53.054324       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1024 19:24:53.056606       1 aggregator.go:166] initial CRD sync complete...
	I1024 19:24:53.056655       1 autoregister_controller.go:141] Starting autoregister controller
	I1024 19:24:53.056678       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1024 19:24:53.056829       1 cache.go:39] Caches are synced for autoregister controller
	I1024 19:24:53.069014       1 controller.go:624] quota admission added evaluator for: namespaces
	I1024 19:24:53.125065       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 19:24:53.125292       1 shared_informer.go:318] Caches are synced for configmaps
	I1024 19:24:53.962086       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1024 19:24:53.973346       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1024 19:24:53.974471       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1024 19:24:54.529331       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1024 19:24:54.590065       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1024 19:24:54.702390       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1024 19:24:54.712382       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.247]
	I1024 19:24:54.713322       1 controller.go:624] quota admission added evaluator for: endpoints
	I1024 19:24:54.717563       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1024 19:24:54.997992       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1024 19:24:56.111913       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1024 19:24:56.139378       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1024 19:24:56.149588       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1024 19:25:09.067684       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1024 19:25:10.152237       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [b15eccdcf8f718a5f523ad290afa414923d4bc3821b6c435171605b3dae31657] <==
	* I1024 19:25:10.741286       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.248µs"
	I1024 19:25:16.737804       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="140.683µs"
	I1024 19:25:16.772945       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="678.945µs"
	I1024 19:25:18.403677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.428194ms"
	I1024 19:25:18.404113       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="132.83µs"
	I1024 19:25:19.217313       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1024 19:25:51.662352       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-632589-m02\" does not exist"
	I1024 19:25:51.683112       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-632589-m02" podCIDRs=["10.244.1.0/24"]
	I1024 19:25:51.694956       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qvkwv"
	I1024 19:25:51.699892       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6vn7s"
	I1024 19:25:54.223076       1 event.go:307] "Event occurred" object="multinode-632589-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-632589-m02 event: Registered Node multinode-632589-m02 in Controller"
	I1024 19:25:54.223423       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-632589-m02"
	I1024 19:25:59.526141       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-632589-m02"
	I1024 19:26:01.999451       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1024 19:26:02.016046       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wrmmm"
	I1024 19:26:02.032043       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-ddcjz"
	I1024 19:26:02.046309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="47.162446ms"
	I1024 19:26:02.082385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.96532ms"
	I1024 19:26:02.082507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="66.994µs"
	I1024 19:26:02.086429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="75.464µs"
	I1024 19:26:04.242278       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-wrmmm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-wrmmm"
	I1024 19:26:04.364789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.564456ms"
	I1024 19:26:04.364927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.64µs"
	I1024 19:26:04.542624       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.128663ms"
	I1024 19:26:04.542943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="36.331µs"
	
	* 
	* ==> kube-proxy [07c4eeb3ad883bdf9490a5f37ef0d81539041e67e4e986bb5750dce8c088ed03] <==
	* I1024 19:25:12.624073       1 server_others.go:69] "Using iptables proxy"
	I1024 19:25:12.637061       1 node.go:141] Successfully retrieved node IP: 192.168.39.247
	I1024 19:25:12.694520       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 19:25:12.694608       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 19:25:12.699066       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:25:12.699269       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:25:12.699509       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:25:12.699831       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:25:12.700927       1 config.go:188] "Starting service config controller"
	I1024 19:25:12.701002       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:25:12.701090       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:25:12.701147       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:25:12.702200       1 config.go:315] "Starting node config controller"
	I1024 19:25:12.702327       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:25:12.801686       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 19:25:12.801828       1 shared_informer.go:318] Caches are synced for service config
	I1024 19:25:12.802526       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [a583da9294cb4fdec84185ceaceb5e1357e9b7362d7bab82f219690e2c8f2d98] <==
	* W1024 19:24:53.073844       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:24:53.073942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1024 19:24:53.074350       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:24:53.075078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1024 19:24:53.075300       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1024 19:24:53.075365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1024 19:24:53.075687       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 19:24:53.075815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1024 19:24:53.890411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:24:53.890485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1024 19:24:53.893032       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:24:53.893101       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 19:24:54.015984       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:24:54.016076       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1024 19:24:54.051507       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 19:24:54.051555       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1024 19:24:54.073953       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 19:24:54.073973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1024 19:24:54.193903       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1024 19:24:54.193925       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1024 19:24:54.249287       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 19:24:54.249344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1024 19:24:54.284436       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:24:54.284485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1024 19:24:56.941217       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 19:24:23 UTC, ends at Tue 2023-10-24 19:26:08 UTC. --
	Oct 24 19:25:10 multinode-632589 kubelet[1249]: I1024 19:25:10.426118    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wcbp\" (UniqueName: \"kubernetes.io/projected/a1c573fd-3f4b-4d90-a366-6d859a121185-kube-api-access-2wcbp\") pod \"kube-proxy-gd49s\" (UID: \"a1c573fd-3f4b-4d90-a366-6d859a121185\") " pod="kube-system/kube-proxy-gd49s"
	Oct 24 19:25:10 multinode-632589 kubelet[1249]: I1024 19:25:10.426140    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a1c573fd-3f4b-4d90-a366-6d859a121185-kube-proxy\") pod \"kube-proxy-gd49s\" (UID: \"a1c573fd-3f4b-4d90-a366-6d859a121185\") " pod="kube-system/kube-proxy-gd49s"
	Oct 24 19:25:10 multinode-632589 kubelet[1249]: I1024 19:25:10.527250    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b-cni-cfg\") pod \"kindnet-xh444\" (UID: \"dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b\") " pod="kube-system/kindnet-xh444"
	Oct 24 19:25:10 multinode-632589 kubelet[1249]: I1024 19:25:10.527312    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd9sl\" (UniqueName: \"kubernetes.io/projected/dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b-kube-api-access-bd9sl\") pod \"kindnet-xh444\" (UID: \"dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b\") " pod="kube-system/kindnet-xh444"
	Oct 24 19:25:10 multinode-632589 kubelet[1249]: I1024 19:25:10.527344    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b-lib-modules\") pod \"kindnet-xh444\" (UID: \"dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b\") " pod="kube-system/kindnet-xh444"
	Oct 24 19:25:10 multinode-632589 kubelet[1249]: I1024 19:25:10.527362    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b-xtables-lock\") pod \"kindnet-xh444\" (UID: \"dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b\") " pod="kube-system/kindnet-xh444"
	Oct 24 19:25:11 multinode-632589 kubelet[1249]: E1024 19:25:11.528527    1249 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 24 19:25:11 multinode-632589 kubelet[1249]: E1024 19:25:11.528812    1249 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1c573fd-3f4b-4d90-a366-6d859a121185-kube-proxy podName:a1c573fd-3f4b-4d90-a366-6d859a121185 nodeName:}" failed. No retries permitted until 2023-10-24 19:25:12.028623477 +0000 UTC m=+15.952162619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/a1c573fd-3f4b-4d90-a366-6d859a121185-kube-proxy") pod "kube-proxy-gd49s" (UID: "a1c573fd-3f4b-4d90-a366-6d859a121185") : failed to sync configmap cache: timed out waiting for the condition
	Oct 24 19:25:13 multinode-632589 kubelet[1249]: I1024 19:25:13.360165    1249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gd49s" podStartSLOduration=3.360121933 podCreationTimestamp="2023-10-24 19:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-24 19:25:13.356930198 +0000 UTC m=+17.280469359" watchObservedRunningTime="2023-10-24 19:25:13.360121933 +0000 UTC m=+17.283661094"
	Oct 24 19:25:16 multinode-632589 kubelet[1249]: I1024 19:25:16.680318    1249 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 24 19:25:16 multinode-632589 kubelet[1249]: I1024 19:25:16.724053    1249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-xh444" podStartSLOduration=6.724006498 podCreationTimestamp="2023-10-24 19:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-24 19:25:15.362678579 +0000 UTC m=+19.286217740" watchObservedRunningTime="2023-10-24 19:25:16.724006498 +0000 UTC m=+20.647545658"
	Oct 24 19:25:16 multinode-632589 kubelet[1249]: I1024 19:25:16.724196    1249 topology_manager.go:215] "Topology Admit Handler" podUID="4023756b-6e38-476d-8dec-90ea2346dc01" podNamespace="kube-system" podName="storage-provisioner"
	Oct 24 19:25:16 multinode-632589 kubelet[1249]: I1024 19:25:16.726361    1249 topology_manager.go:215] "Topology Admit Handler" podUID="20aa782d-e6ed-45ad-b625-556d1a8503c0" podNamespace="kube-system" podName="coredns-5dd5756b68-c5l8s"
	Oct 24 19:25:16 multinode-632589 kubelet[1249]: I1024 19:25:16.870670    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jmmj\" (UniqueName: \"kubernetes.io/projected/20aa782d-e6ed-45ad-b625-556d1a8503c0-kube-api-access-2jmmj\") pod \"coredns-5dd5756b68-c5l8s\" (UID: \"20aa782d-e6ed-45ad-b625-556d1a8503c0\") " pod="kube-system/coredns-5dd5756b68-c5l8s"
	Oct 24 19:25:16 multinode-632589 kubelet[1249]: I1024 19:25:16.870837    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtm62\" (UniqueName: \"kubernetes.io/projected/4023756b-6e38-476d-8dec-90ea2346dc01-kube-api-access-qtm62\") pod \"storage-provisioner\" (UID: \"4023756b-6e38-476d-8dec-90ea2346dc01\") " pod="kube-system/storage-provisioner"
	Oct 24 19:25:16 multinode-632589 kubelet[1249]: I1024 19:25:16.870944    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20aa782d-e6ed-45ad-b625-556d1a8503c0-config-volume\") pod \"coredns-5dd5756b68-c5l8s\" (UID: \"20aa782d-e6ed-45ad-b625-556d1a8503c0\") " pod="kube-system/coredns-5dd5756b68-c5l8s"
	Oct 24 19:25:16 multinode-632589 kubelet[1249]: I1024 19:25:16.870990    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4023756b-6e38-476d-8dec-90ea2346dc01-tmp\") pod \"storage-provisioner\" (UID: \"4023756b-6e38-476d-8dec-90ea2346dc01\") " pod="kube-system/storage-provisioner"
	Oct 24 19:25:18 multinode-632589 kubelet[1249]: I1024 19:25:18.388610    1249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.388573829 podCreationTimestamp="2023-10-24 19:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-24 19:25:18.374349598 +0000 UTC m=+22.297888758" watchObservedRunningTime="2023-10-24 19:25:18.388573829 +0000 UTC m=+22.312112986"
	Oct 24 19:25:56 multinode-632589 kubelet[1249]: E1024 19:25:56.271071    1249 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 19:25:56 multinode-632589 kubelet[1249]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 19:25:56 multinode-632589 kubelet[1249]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 19:25:56 multinode-632589 kubelet[1249]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 19:26:02 multinode-632589 kubelet[1249]: I1024 19:26:02.053799    1249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-c5l8s" podStartSLOduration=53.053634833 podCreationTimestamp="2023-10-24 19:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-24 19:25:18.38917504 +0000 UTC m=+22.312714201" watchObservedRunningTime="2023-10-24 19:26:02.053634833 +0000 UTC m=+65.977173996"
	Oct 24 19:26:02 multinode-632589 kubelet[1249]: I1024 19:26:02.054313    1249 topology_manager.go:215] "Topology Admit Handler" podUID="5b81ca1e-f022-4358-be2c-27042e8503c1" podNamespace="default" podName="busybox-5bc68d56bd-ddcjz"
	Oct 24 19:26:02 multinode-632589 kubelet[1249]: I1024 19:26:02.108298    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmc5g\" (UniqueName: \"kubernetes.io/projected/5b81ca1e-f022-4358-be2c-27042e8503c1-kube-api-access-kmc5g\") pod \"busybox-5bc68d56bd-ddcjz\" (UID: \"5b81ca1e-f022-4358-be2c-27042e8503c1\") " pod="default/busybox-5bc68d56bd-ddcjz"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-632589 -n multinode-632589
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-632589 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.22s)
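A minimal manual reproduction of the failed host-ping step can be sketched from the artifacts above: the two busybox replicas listed in the node descriptions (busybox-5bc68d56bd-ddcjz and busybox-5bc68d56bd-wrmmm) and the host.minikube.internal lookups visible in the coredns log. The commands below are an illustrative sketch rather than the test's exact invocation; pod names are taken from the logs, and it is assumed that the busybox image provides nslookup and ping.

	# locate the two busybox pods and the nodes they landed on
	kubectl --context multinode-632589 get pods -o wide
	# resolve the host entry coredns served above, then ping it from each pod
	kubectl --context multinode-632589 exec busybox-5bc68d56bd-ddcjz -- nslookup host.minikube.internal
	kubectl --context multinode-632589 exec busybox-5bc68d56bd-ddcjz -- ping -c 1 host.minikube.internal
	kubectl --context multinode-632589 exec busybox-5bc68d56bd-wrmmm -- ping -c 1 host.minikube.internal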

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (689.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-632589
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-632589
E1024 19:28:10.558537   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:28:19.104671   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-632589: exit status 82 (2m0.947367001s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-632589"  ...
	* Stopping node "multinode-632589"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-632589" : exit status 82
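Since the stop exits with GUEST_STOP_TIMEOUT while the guest still reports state "Running", a hedged triage sketch (not part of the test run) is to compare minikube's view of the profile with libvirt's and then retry the stop with verbose client logging; it is assumed here that the kvm2 driver names the libvirt domain after the profile.

	# what minikube believes the profile state is
	out/minikube-linux-amd64 status -p multinode-632589
	# what libvirt believes the domain state is (kvm2 driver)
	sudo virsh list --all
	# retry the stop with verbose client logging
	out/minikube-linux-amd64 stop -p multinode-632589 --alsologtostderr -v=7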
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-632589 --wait=true -v=8 --alsologtostderr
E1024 19:29:42.152798   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:31:00.584525   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:33:10.559092   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:33:19.104499   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:34:33.603827   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:36:00.584600   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:37:23.629769   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:38:10.558395   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:38:19.103894   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-632589 --wait=true -v=8 --alsologtostderr: (9m25.554394257s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-632589
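Note on the 9m25s restart above: most of that time is spent waiting for SSH on the control-plane VM, visible in the post-mortem log below as repeated "dial tcp 192.168.39.247:22: connect: no route to host" errors every few seconds for roughly five minutes before the host is finally restarted. A minimal Go sketch of that kind of dial-and-retry wait (a simplified assumption for illustration, not minikube's actual provisioning code; the 3-second dial timeout, 3-second pause, and the waitForSSH name are invented here) could look like:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls addr ("host:22") until a TCP connect succeeds or the
	// overall deadline expires, mirroring the retry cadence seen in the log.
	func waitForSSH(addr string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil // port 22 is reachable; provisioning can proceed
			}
			fmt.Printf("dial %s failed: %v; retrying\n", addr, err)
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("%s not reachable within %s", addr, deadline)
	}

	func main() {
		if err := waitForSSH("192.168.39.247:22", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}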
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-632589 -n multinode-632589
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-632589 logs -n 25: (1.528137228s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-632589 ssh -n                                                                 | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | multinode-632589-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-632589 cp multinode-632589-m02:/home/docker/cp-test.txt                       | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3245783295/001/cp-test_multinode-632589-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n                                                                 | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | multinode-632589-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-632589 cp multinode-632589-m02:/home/docker/cp-test.txt                       | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | multinode-632589:/home/docker/cp-test_multinode-632589-m02_multinode-632589.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n                                                                 | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | multinode-632589-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n multinode-632589 sudo cat                                       | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | /home/docker/cp-test_multinode-632589-m02_multinode-632589.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-632589 cp multinode-632589-m02:/home/docker/cp-test.txt                       | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | multinode-632589-m03:/home/docker/cp-test_multinode-632589-m02_multinode-632589-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n                                                                 | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | multinode-632589-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n multinode-632589-m03 sudo cat                                   | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | /home/docker/cp-test_multinode-632589-m02_multinode-632589-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-632589 cp testdata/cp-test.txt                                                | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:27 UTC |
	|         | multinode-632589-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n                                                                 | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | multinode-632589-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-632589 cp multinode-632589-m03:/home/docker/cp-test.txt                       | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3245783295/001/cp-test_multinode-632589-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n                                                                 | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | multinode-632589-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-632589 cp multinode-632589-m03:/home/docker/cp-test.txt                       | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | multinode-632589:/home/docker/cp-test_multinode-632589-m03_multinode-632589.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n                                                                 | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | multinode-632589-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n multinode-632589 sudo cat                                       | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | /home/docker/cp-test_multinode-632589-m03_multinode-632589.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-632589 cp multinode-632589-m03:/home/docker/cp-test.txt                       | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | multinode-632589-m02:/home/docker/cp-test_multinode-632589-m03_multinode-632589-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n                                                                 | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | multinode-632589-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n multinode-632589-m02 sudo cat                                   | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | /home/docker/cp-test_multinode-632589-m03_multinode-632589-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-632589 node stop m03                                                          | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	| node    | multinode-632589 node start                                                             | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-632589                                                                | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC |                     |
	| stop    | -p multinode-632589                                                                     | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC |                     |
	| start   | -p multinode-632589                                                                     | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:29 UTC | 24 Oct 23 19:39 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-632589                                                                | multinode-632589 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:29:34
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:29:34.936689   33086 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:29:34.936808   33086 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:29:34.936829   33086 out.go:309] Setting ErrFile to fd 2...
	I1024 19:29:34.936835   33086 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:29:34.937036   33086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 19:29:34.937593   33086 out.go:303] Setting JSON to false
	I1024 19:29:34.938508   33086 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4073,"bootTime":1698171702,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:29:34.938568   33086 start.go:138] virtualization: kvm guest
	I1024 19:29:34.940908   33086 out.go:177] * [multinode-632589] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:29:34.942757   33086 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:29:34.942779   33086 notify.go:220] Checking for updates...
	I1024 19:29:34.944160   33086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:29:34.945554   33086 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:29:34.947053   33086 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:29:34.948538   33086 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:29:34.949990   33086 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:29:34.951926   33086 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:29:34.952026   33086 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:29:34.952436   33086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:29:34.952493   33086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:29:34.966752   33086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44179
	I1024 19:29:34.967264   33086 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:29:34.967760   33086 main.go:141] libmachine: Using API Version  1
	I1024 19:29:34.967779   33086 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:29:34.968094   33086 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:29:34.968274   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:29:35.004527   33086 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 19:29:35.006037   33086 start.go:298] selected driver: kvm2
	I1024 19:29:35.006053   33086 start.go:902] validating driver "kvm2" against &{Name:multinode-632589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:29:35.006195   33086 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:29:35.006609   33086 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:29:35.006699   33086 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:29:35.021079   33086 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:29:35.021781   33086 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:29:35.021858   33086 cni.go:84] Creating CNI manager for ""
	I1024 19:29:35.021872   33086 cni.go:136] 3 nodes found, recommending kindnet
	I1024 19:29:35.021885   33086 start_flags.go:323] config:
	{Name:multinode-632589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-632589 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pro
visioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:29:35.022153   33086 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:29:35.023986   33086 out.go:177] * Starting control plane node multinode-632589 in cluster multinode-632589
	I1024 19:29:35.025275   33086 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:29:35.025338   33086 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1024 19:29:35.025351   33086 cache.go:57] Caching tarball of preloaded images
	I1024 19:29:35.025436   33086 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 19:29:35.025447   33086 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:29:35.025571   33086 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/config.json ...
	I1024 19:29:35.025754   33086 start.go:365] acquiring machines lock for multinode-632589: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:29:35.025801   33086 start.go:369] acquired machines lock for "multinode-632589" in 26.14µs
	I1024 19:29:35.025815   33086 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:29:35.025826   33086 fix.go:54] fixHost starting: 
	I1024 19:29:35.026061   33086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:29:35.026090   33086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:29:35.039809   33086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40061
	I1024 19:29:35.040232   33086 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:29:35.040769   33086 main.go:141] libmachine: Using API Version  1
	I1024 19:29:35.040799   33086 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:29:35.042316   33086 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:29:35.042522   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:29:35.042648   33086 main.go:141] libmachine: (multinode-632589) Calling .GetState
	I1024 19:29:35.044400   33086 fix.go:102] recreateIfNeeded on multinode-632589: state=Running err=<nil>
	W1024 19:29:35.044442   33086 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:29:35.046485   33086 out.go:177] * Updating the running kvm2 "multinode-632589" VM ...
	I1024 19:29:35.047897   33086 machine.go:88] provisioning docker machine ...
	I1024 19:29:35.047914   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:29:35.048109   33086 main.go:141] libmachine: (multinode-632589) Calling .GetMachineName
	I1024 19:29:35.048301   33086 buildroot.go:166] provisioning hostname "multinode-632589"
	I1024 19:29:35.048313   33086 main.go:141] libmachine: (multinode-632589) Calling .GetMachineName
	I1024 19:29:35.048450   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:29:35.051182   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:29:35.051642   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:29:35.051669   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:29:35.051815   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:29:35.052012   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:29:35.052146   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:29:35.052269   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:29:35.052393   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:29:35.052725   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I1024 19:29:35.052741   33086 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-632589 && echo "multinode-632589" | sudo tee /etc/hostname
	I1024 19:29:53.433559   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:29:59.513554   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:02.585663   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:08.665599   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:11.737553   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:17.817534   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:20.889596   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:26.969572   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:30.041570   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:36.121563   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:39.193515   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:45.273754   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:48.345518   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:54.425565   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:30:57.497533   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:03.577579   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:06.649560   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:12.729530   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:15.801583   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:21.881615   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:24.953551   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:31.033584   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:34.105580   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:40.185555   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:43.257649   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:49.337563   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:52.409533   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:31:58.489593   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:01.561518   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:07.641643   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:10.713581   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:16.793534   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:19.865536   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:25.945656   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:29.017538   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:35.097584   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:38.169498   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:44.249584   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:47.321550   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:53.401567   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:32:56.473647   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:02.553551   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:05.625528   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:11.705613   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:14.777577   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:20.857594   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:23.929613   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:30.009534   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:33.081528   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:39.161605   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:42.233612   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:48.313576   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:51.385591   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:33:57.465542   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:34:00.537617   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:34:06.617581   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:34:09.689529   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:34:15.769560   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:34:18.841598   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:34:24.921575   33086 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.247:22: connect: no route to host
	I1024 19:34:27.923924   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:34:27.923969   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:34:27.925981   33086 machine.go:91] provisioned docker machine in 4m52.878070098s
	I1024 19:34:27.926016   33086 fix.go:56] fixHost completed within 4m52.900194162s
	I1024 19:34:27.926021   33086 start.go:83] releasing machines lock for "multinode-632589", held for 4m52.900211741s
	W1024 19:34:27.926040   33086 start.go:691] error starting host: provision: host is not running
	W1024 19:34:27.926120   33086 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1024 19:34:27.926133   33086 start.go:706] Will try again in 5 seconds ...
	I1024 19:34:32.926454   33086 start.go:365] acquiring machines lock for multinode-632589: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:34:32.926568   33086 start.go:369] acquired machines lock for "multinode-632589" in 78.861µs
	I1024 19:34:32.926594   33086 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:34:32.926601   33086 fix.go:54] fixHost starting: 
	I1024 19:34:32.926869   33086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:34:32.926896   33086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:34:32.940928   33086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I1024 19:34:32.941372   33086 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:34:32.941815   33086 main.go:141] libmachine: Using API Version  1
	I1024 19:34:32.941833   33086 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:34:32.942138   33086 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:34:32.942292   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:34:32.942399   33086 main.go:141] libmachine: (multinode-632589) Calling .GetState
	I1024 19:34:32.944146   33086 fix.go:102] recreateIfNeeded on multinode-632589: state=Stopped err=<nil>
	I1024 19:34:32.944167   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	W1024 19:34:32.944273   33086 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:34:32.946296   33086 out.go:177] * Restarting existing kvm2 VM for "multinode-632589" ...
	I1024 19:34:32.947698   33086 main.go:141] libmachine: (multinode-632589) Calling .Start
	I1024 19:34:32.947865   33086 main.go:141] libmachine: (multinode-632589) Ensuring networks are active...
	I1024 19:34:32.948565   33086 main.go:141] libmachine: (multinode-632589) Ensuring network default is active
	I1024 19:34:32.948867   33086 main.go:141] libmachine: (multinode-632589) Ensuring network mk-multinode-632589 is active
	I1024 19:34:32.949266   33086 main.go:141] libmachine: (multinode-632589) Getting domain xml...
	I1024 19:34:32.949998   33086 main.go:141] libmachine: (multinode-632589) Creating domain...
	I1024 19:34:34.162499   33086 main.go:141] libmachine: (multinode-632589) Waiting to get IP...
	I1024 19:34:34.163314   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:34.163728   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:34.163800   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:34.163708   33863 retry.go:31] will retry after 271.489594ms: waiting for machine to come up
	I1024 19:34:34.437212   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:34.437715   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:34.437746   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:34.437688   33863 retry.go:31] will retry after 241.086849ms: waiting for machine to come up
	I1024 19:34:34.680050   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:34.680624   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:34.680647   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:34.680559   33863 retry.go:31] will retry after 453.394222ms: waiting for machine to come up
	I1024 19:34:35.134966   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:35.135381   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:35.135410   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:35.135337   33863 retry.go:31] will retry after 583.003542ms: waiting for machine to come up
	I1024 19:34:35.719972   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:35.720497   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:35.720539   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:35.720469   33863 retry.go:31] will retry after 509.331681ms: waiting for machine to come up
	I1024 19:34:36.231061   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:36.231543   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:36.231573   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:36.231480   33863 retry.go:31] will retry after 671.304436ms: waiting for machine to come up
	I1024 19:34:36.904383   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:36.904785   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:36.904828   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:36.904717   33863 retry.go:31] will retry after 812.885897ms: waiting for machine to come up
	I1024 19:34:37.719653   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:37.720107   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:37.720159   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:37.720058   33863 retry.go:31] will retry after 900.169417ms: waiting for machine to come up
	I1024 19:34:38.621249   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:38.621771   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:38.621802   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:38.621722   33863 retry.go:31] will retry after 1.716200652s: waiting for machine to come up
	I1024 19:34:40.340551   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:40.340976   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:40.341006   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:40.340930   33863 retry.go:31] will retry after 1.411866891s: waiting for machine to come up
	I1024 19:34:41.754686   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:41.755194   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:41.755245   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:41.755185   33863 retry.go:31] will retry after 2.001448754s: waiting for machine to come up
	I1024 19:34:43.758260   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:43.758849   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:43.758885   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:43.758789   33863 retry.go:31] will retry after 2.469135086s: waiting for machine to come up
	I1024 19:34:46.231578   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:46.232052   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:46.232081   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:46.232012   33863 retry.go:31] will retry after 3.415474449s: waiting for machine to come up
	I1024 19:34:49.649679   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:49.650086   33086 main.go:141] libmachine: (multinode-632589) DBG | unable to find current IP address of domain multinode-632589 in network mk-multinode-632589
	I1024 19:34:49.650107   33086 main.go:141] libmachine: (multinode-632589) DBG | I1024 19:34:49.650048   33863 retry.go:31] will retry after 4.942053181s: waiting for machine to come up
	I1024 19:34:54.594417   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.594867   33086 main.go:141] libmachine: (multinode-632589) Found IP for machine: 192.168.39.247
	I1024 19:34:54.594901   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has current primary IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.594915   33086 main.go:141] libmachine: (multinode-632589) Reserving static IP address...
	I1024 19:34:54.595299   33086 main.go:141] libmachine: (multinode-632589) Reserved static IP address: 192.168.39.247
	I1024 19:34:54.595333   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "multinode-632589", mac: "52:54:00:9a:c3:34", ip: "192.168.39.247"} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:54.595364   33086 main.go:141] libmachine: (multinode-632589) Waiting for SSH to be available...
	I1024 19:34:54.595395   33086 main.go:141] libmachine: (multinode-632589) DBG | skip adding static IP to network mk-multinode-632589 - found existing host DHCP lease matching {name: "multinode-632589", mac: "52:54:00:9a:c3:34", ip: "192.168.39.247"}
	I1024 19:34:54.595417   33086 main.go:141] libmachine: (multinode-632589) DBG | Getting to WaitForSSH function...
	I1024 19:34:54.597460   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.597833   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:54.597864   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.598008   33086 main.go:141] libmachine: (multinode-632589) DBG | Using SSH client type: external
	I1024 19:34:54.598033   33086 main.go:141] libmachine: (multinode-632589) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa (-rw-------)
	I1024 19:34:54.598070   33086 main.go:141] libmachine: (multinode-632589) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 19:34:54.598087   33086 main.go:141] libmachine: (multinode-632589) DBG | About to run SSH command:
	I1024 19:34:54.598099   33086 main.go:141] libmachine: (multinode-632589) DBG | exit 0
	I1024 19:34:54.688963   33086 main.go:141] libmachine: (multinode-632589) DBG | SSH cmd err, output: <nil>: 
	I1024 19:34:54.689312   33086 main.go:141] libmachine: (multinode-632589) Calling .GetConfigRaw
	I1024 19:34:54.690014   33086 main.go:141] libmachine: (multinode-632589) Calling .GetIP
	I1024 19:34:54.692312   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.692686   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:54.692719   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.692915   33086 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/config.json ...
	I1024 19:34:54.693080   33086 machine.go:88] provisioning docker machine ...
	I1024 19:34:54.693095   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:34:54.693329   33086 main.go:141] libmachine: (multinode-632589) Calling .GetMachineName
	I1024 19:34:54.693494   33086 buildroot.go:166] provisioning hostname "multinode-632589"
	I1024 19:34:54.693508   33086 main.go:141] libmachine: (multinode-632589) Calling .GetMachineName
	I1024 19:34:54.693671   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:34:54.695646   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.695970   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:54.696006   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.696114   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:34:54.696279   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:34:54.696399   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:34:54.696547   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:34:54.696694   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:34:54.697061   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I1024 19:34:54.697075   33086 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-632589 && echo "multinode-632589" | sudo tee /etc/hostname
	I1024 19:34:54.825134   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-632589
	
	I1024 19:34:54.825160   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:34:54.827844   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.828173   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:54.828201   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.828375   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:34:54.828591   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:34:54.828713   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:34:54.828840   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:34:54.828973   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:34:54.829354   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I1024 19:34:54.829383   33086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-632589' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-632589/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-632589' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:34:54.952997   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:34:54.953026   33086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 19:34:54.953062   33086 buildroot.go:174] setting up certificates
	I1024 19:34:54.953081   33086 provision.go:83] configureAuth start
	I1024 19:34:54.953097   33086 main.go:141] libmachine: (multinode-632589) Calling .GetMachineName
	I1024 19:34:54.953392   33086 main.go:141] libmachine: (multinode-632589) Calling .GetIP
	I1024 19:34:54.955567   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.955890   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:54.955939   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.956098   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:34:54.958062   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.958408   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:54.958435   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:54.958542   33086 provision.go:138] copyHostCerts
	I1024 19:34:54.958580   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:34:54.958617   33086 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 19:34:54.958628   33086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:34:54.958697   33086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 19:34:54.958780   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:34:54.958798   33086 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 19:34:54.958805   33086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:34:54.958828   33086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 19:34:54.958881   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:34:54.958897   33086 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 19:34:54.958903   33086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:34:54.958925   33086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 19:34:54.959005   33086 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.multinode-632589 san=[192.168.39.247 192.168.39.247 localhost 127.0.0.1 minikube multinode-632589]
	I1024 19:34:55.272532   33086 provision.go:172] copyRemoteCerts
	I1024 19:34:55.272591   33086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:34:55.272614   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:34:55.275076   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.275437   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:55.275471   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.275606   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:34:55.275781   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:34:55.275920   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:34:55.276062   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:34:55.363084   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 19:34:55.363138   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1024 19:34:55.384522   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 19:34:55.384568   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:34:55.404949   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 19:34:55.404993   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 19:34:55.425692   33086 provision.go:86] duration metric: configureAuth took 472.594897ms
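The provisioning above copies ca.pem, cert.pem and key.pem from the host profile, generates a server certificate whose SANs cover 192.168.39.247, localhost, 127.0.0.1, minikube and multinode-632589, and pushes server.pem, server-key.pem and ca.pem to /etc/docker on the guest. A minimal way to sanity-check the result from inside the guest is sketched below; it assumes openssl is present in the buildroot image, which the log does not confirm.

    # inspect the SANs of the provisioned server certificate
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # confirm it chains to the CA that was copied alongside it
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem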
	I1024 19:34:55.425717   33086 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:34:55.425973   33086 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:34:55.426044   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:34:55.428680   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.429045   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:55.429076   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.429264   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:34:55.429470   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:34:55.429639   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:34:55.429772   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:34:55.429913   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:34:55.430297   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I1024 19:34:55.430313   33086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:34:55.741699   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:34:55.741733   33086 machine.go:91] provisioned docker machine in 1.048641336s
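The %!s(MISSING) token in the logged command is a Go format-verb placeholder from the logger; what actually runs on the guest writes a one-line options file and restarts CRI-O. A sketch of the intended effect, using the value echoed back in the output above:

    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio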
	I1024 19:34:55.741743   33086 start.go:300] post-start starting for "multinode-632589" (driver="kvm2")
	I1024 19:34:55.741752   33086 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:34:55.741769   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:34:55.742091   33086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:34:55.742120   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:34:55.745115   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.745506   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:55.745553   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.745691   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:34:55.745897   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:34:55.746065   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:34:55.746209   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:34:55.830710   33086 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:34:55.834633   33086 command_runner.go:130] > NAME=Buildroot
	I1024 19:34:55.834648   33086 command_runner.go:130] > VERSION=2021.02.12-1-g71212f5-dirty
	I1024 19:34:55.834654   33086 command_runner.go:130] > ID=buildroot
	I1024 19:34:55.834662   33086 command_runner.go:130] > VERSION_ID=2021.02.12
	I1024 19:34:55.834671   33086 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1024 19:34:55.834756   33086 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 19:34:55.834780   33086 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 19:34:55.834853   33086 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 19:34:55.834925   33086 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 19:34:55.834935   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> /etc/ssl/certs/162982.pem
	I1024 19:34:55.835011   33086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:34:55.842662   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:34:55.864842   33086 start.go:303] post-start completed in 123.087278ms
	I1024 19:34:55.864863   33086 fix.go:56] fixHost completed within 22.938260061s
	I1024 19:34:55.864886   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:34:55.867054   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.867336   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:55.867376   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.867521   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:34:55.867700   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:34:55.867857   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:34:55.868009   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:34:55.868155   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:34:55.868696   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I1024 19:34:55.868714   33086 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 19:34:55.985883   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698176095.971015568
	
	I1024 19:34:55.985907   33086 fix.go:206] guest clock: 1698176095.971015568
	I1024 19:34:55.985918   33086 fix.go:219] Guest: 2023-10-24 19:34:55.971015568 +0000 UTC Remote: 2023-10-24 19:34:55.86486734 +0000 UTC m=+320.978688329 (delta=106.148228ms)
	I1024 19:34:55.985955   33086 fix.go:190] guest clock delta is within tolerance: 106.148228ms
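The command logged as date +%!s(MISSING).%!N(MISSING) is the logger's rendering of date +%s.%N; minikube compares that guest timestamp with the host-side time recorded around the SSH call and accepts the drift when it is within tolerance. Reproducing the arithmetic with the two timestamps from the log:

    guest=1698176095.971015568   # date +%s.%N as reported by the guest
    host=1698176095.864867340    # host-side wall clock from the same log line
    echo "$guest $host" | awk '{ printf "delta: %.1f ms\n", ($1 - $2) * 1000 }'   # ~106.1 ms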
	I1024 19:34:55.985962   33086 start.go:83] releasing machines lock for "multinode-632589", held for 23.059379923s
	I1024 19:34:55.985992   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:34:55.986286   33086 main.go:141] libmachine: (multinode-632589) Calling .GetIP
	I1024 19:34:55.988842   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.989180   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:55.989220   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.989326   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:34:55.989794   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:34:55.989993   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:34:55.990044   33086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:34:55.990097   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:34:55.990219   33086 ssh_runner.go:195] Run: cat /version.json
	I1024 19:34:55.990245   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:34:55.992892   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.993248   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:55.993292   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.993327   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.993460   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:34:55.993632   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:34:55.993792   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:34:55.993947   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:55.993959   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:34:55.993981   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:55.994057   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:34:55.994205   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:34:55.994339   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:34:55.994461   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:34:56.073715   33086 command_runner.go:130] > {"iso_version": "v1.31.0-1697471113-17434", "kicbase_version": "v0.0.40-1697451950-17434", "minikube_version": "v1.31.2", "commit": "141089eac34bd516aedd7845aa4003657eadd19b"}
	I1024 19:34:56.073891   33086 ssh_runner.go:195] Run: systemctl --version
	I1024 19:34:56.101589   33086 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1024 19:34:56.102637   33086 command_runner.go:130] > systemd 247 (247)
	I1024 19:34:56.102664   33086 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1024 19:34:56.102723   33086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:34:56.251347   33086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:34:56.257584   33086 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1024 19:34:56.258064   33086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:34:56.258128   33086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:34:56.272248   33086 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1024 19:34:56.272391   33086 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
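The find invocation above is logged with a %!p(MISSING) placeholder; the underlying command renames any bridge or podman CNI config that is not already disabled, which is why 87-podman-bridge.conflist ends up as 87-podman-bridge.conflist.mk_disabled. A quoted version of the same operation:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;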
	I1024 19:34:56.272405   33086 start.go:472] detecting cgroup driver to use...
	I1024 19:34:56.272472   33086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:34:56.285135   33086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:34:56.296615   33086 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:34:56.296686   33086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:34:56.308627   33086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:34:56.320926   33086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:34:56.420043   33086 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1024 19:34:56.420108   33086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:34:56.533415   33086 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1024 19:34:56.533448   33086 docker.go:214] disabling docker service ...
	I1024 19:34:56.533521   33086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:34:56.545563   33086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:34:56.555717   33086 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1024 19:34:56.556621   33086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:34:56.662212   33086 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1024 19:34:56.662277   33086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:34:56.678396   33086 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1024 19:34:56.678706   33086 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1024 19:34:56.764323   33086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:34:56.775863   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:34:56.791946   33086 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1024 19:34:56.792456   33086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:34:56.792518   33086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:34:56.801126   33086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:34:56.801171   33086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:34:56.809724   33086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:34:56.818542   33086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:34:56.827182   33086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:34:56.836313   33086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:34:56.843708   33086 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 19:34:56.843867   33086 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 19:34:56.843920   33086 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 19:34:56.854845   33086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:34:56.863558   33086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:34:56.958792   33086 ssh_runner.go:195] Run: sudo systemctl restart crio
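Collected into one runnable sketch, the runtime configuration performed above (all paths and values are the ones shown in the log): point crictl at the CRI-O socket, set the pause image and the cgroupfs cgroup manager in the CRI-O drop-in, clear the stale CNI state, enable the kernel prerequisites, and restart CRI-O.

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo rm -rf /etc/cni/net.mk
    sudo modprobe br_netfilter                      # bridge-nf-call-iptables is absent until this loads
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio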
	I1024 19:34:57.109352   33086 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:34:57.109430   33086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:34:57.117896   33086 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1024 19:34:57.117917   33086 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1024 19:34:57.117927   33086 command_runner.go:130] > Device: 16h/22d	Inode: 797         Links: 1
	I1024 19:34:57.117941   33086 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:34:57.117952   33086 command_runner.go:130] > Access: 2023-10-24 19:34:57.080418395 +0000
	I1024 19:34:57.117969   33086 command_runner.go:130] > Modify: 2023-10-24 19:34:57.080418395 +0000
	I1024 19:34:57.117983   33086 command_runner.go:130] > Change: 2023-10-24 19:34:57.080418395 +0000
	I1024 19:34:57.117989   33086 command_runner.go:130] >  Birth: -
	I1024 19:34:57.118294   33086 start.go:540] Will wait 60s for crictl version
	I1024 19:34:57.118353   33086 ssh_runner.go:195] Run: which crictl
	I1024 19:34:57.122107   33086 command_runner.go:130] > /usr/bin/crictl
	I1024 19:34:57.122172   33086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:34:57.153938   33086 command_runner.go:130] > Version:  0.1.0
	I1024 19:34:57.153980   33086 command_runner.go:130] > RuntimeName:  cri-o
	I1024 19:34:57.153989   33086 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1024 19:34:57.154024   33086 command_runner.go:130] > RuntimeApiVersion:  v1
	I1024 19:34:57.155332   33086 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
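A condensed equivalent of the socket wait and runtime probe above; the 60-second budget mirrors the timeouts stated in the log.

    # wait for the CRI-O socket to appear, then query the runtime over CRI
    timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'
    sudo /usr/bin/crictl version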
	I1024 19:34:57.155419   33086 ssh_runner.go:195] Run: crio --version
	I1024 19:34:57.207065   33086 command_runner.go:130] > crio version 1.24.1
	I1024 19:34:57.207081   33086 command_runner.go:130] > Version:          1.24.1
	I1024 19:34:57.207089   33086 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1024 19:34:57.207093   33086 command_runner.go:130] > GitTreeState:     dirty
	I1024 19:34:57.207099   33086 command_runner.go:130] > BuildDate:        2023-10-16T21:18:20Z
	I1024 19:34:57.207103   33086 command_runner.go:130] > GoVersion:        go1.19.9
	I1024 19:34:57.207107   33086 command_runner.go:130] > Compiler:         gc
	I1024 19:34:57.207111   33086 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:34:57.207122   33086 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:34:57.207129   33086 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:34:57.207133   33086 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:34:57.207137   33086 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:34:57.207361   33086 ssh_runner.go:195] Run: crio --version
	I1024 19:34:57.246223   33086 command_runner.go:130] > crio version 1.24.1
	I1024 19:34:57.246247   33086 command_runner.go:130] > Version:          1.24.1
	I1024 19:34:57.246259   33086 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1024 19:34:57.246266   33086 command_runner.go:130] > GitTreeState:     dirty
	I1024 19:34:57.246275   33086 command_runner.go:130] > BuildDate:        2023-10-16T21:18:20Z
	I1024 19:34:57.246282   33086 command_runner.go:130] > GoVersion:        go1.19.9
	I1024 19:34:57.246289   33086 command_runner.go:130] > Compiler:         gc
	I1024 19:34:57.246296   33086 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:34:57.246319   33086 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:34:57.246336   33086 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:34:57.246347   33086 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:34:57.246356   33086 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:34:57.249174   33086 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 19:34:57.250482   33086 main.go:141] libmachine: (multinode-632589) Calling .GetIP
	I1024 19:34:57.252916   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:57.253223   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:34:57.253253   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:34:57.253438   33086 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 19:34:57.257165   33086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:34:57.268232   33086 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:34:57.268303   33086 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:34:57.311672   33086 command_runner.go:130] > {
	I1024 19:34:57.311694   33086 command_runner.go:130] >   "images": [
	I1024 19:34:57.311705   33086 command_runner.go:130] >     {
	I1024 19:34:57.311717   33086 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1024 19:34:57.311725   33086 command_runner.go:130] >       "repoTags": [
	I1024 19:34:57.311734   33086 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1024 19:34:57.311740   33086 command_runner.go:130] >       ],
	I1024 19:34:57.311751   33086 command_runner.go:130] >       "repoDigests": [
	I1024 19:34:57.311764   33086 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1024 19:34:57.311779   33086 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1024 19:34:57.311788   33086 command_runner.go:130] >       ],
	I1024 19:34:57.311797   33086 command_runner.go:130] >       "size": "750414",
	I1024 19:34:57.311806   33086 command_runner.go:130] >       "uid": {
	I1024 19:34:57.311816   33086 command_runner.go:130] >         "value": "65535"
	I1024 19:34:57.311825   33086 command_runner.go:130] >       },
	I1024 19:34:57.311838   33086 command_runner.go:130] >       "username": "",
	I1024 19:34:57.311849   33086 command_runner.go:130] >       "spec": null,
	I1024 19:34:57.311858   33086 command_runner.go:130] >       "pinned": false
	I1024 19:34:57.311868   33086 command_runner.go:130] >     }
	I1024 19:34:57.311873   33086 command_runner.go:130] >   ]
	I1024 19:34:57.311888   33086 command_runner.go:130] > }
	I1024 19:34:57.312014   33086 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 19:34:57.312060   33086 ssh_runner.go:195] Run: which lz4
	I1024 19:34:57.315772   33086 command_runner.go:130] > /usr/bin/lz4
	I1024 19:34:57.315799   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1024 19:34:57.315881   33086 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 19:34:57.319742   33086 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:34:57.319782   33086 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:34:57.319806   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1024 19:34:59.226198   33086 crio.go:444] Took 1.910335 seconds to copy over tarball
	I1024 19:34:59.226274   33086 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 19:35:01.994019   33086 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.767714871s)
	I1024 19:35:01.994049   33086 crio.go:451] Took 2.767827 seconds to extract the tarball
	I1024 19:35:01.994059   33086 ssh_runner.go:146] rm: /preloaded.tar.lz4
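Guest-side view of the preload step (the ~457 MB tarball copy itself is done by minikube's ssh_runner): the archive is unpacked into /var so CRI-O's overlay image store already contains the v1.28.3 images, then the tarball is removed.

    # after /preloaded.tar.lz4 has been copied to the guest
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images        # should now list the kube-* images for v1.28.3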
	I1024 19:35:02.035584   33086 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:35:02.083878   33086 command_runner.go:130] > {
	I1024 19:35:02.083896   33086 command_runner.go:130] >   "images": [
	I1024 19:35:02.083900   33086 command_runner.go:130] >     {
	I1024 19:35:02.083907   33086 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1024 19:35:02.083912   33086 command_runner.go:130] >       "repoTags": [
	I1024 19:35:02.083918   33086 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1024 19:35:02.083922   33086 command_runner.go:130] >       ],
	I1024 19:35:02.083927   33086 command_runner.go:130] >       "repoDigests": [
	I1024 19:35:02.083934   33086 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1024 19:35:02.083942   33086 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1024 19:35:02.083951   33086 command_runner.go:130] >       ],
	I1024 19:35:02.083956   33086 command_runner.go:130] >       "size": "65258016",
	I1024 19:35:02.083963   33086 command_runner.go:130] >       "uid": null,
	I1024 19:35:02.083969   33086 command_runner.go:130] >       "username": "",
	I1024 19:35:02.084002   33086 command_runner.go:130] >       "spec": null,
	I1024 19:35:02.084011   33086 command_runner.go:130] >       "pinned": false
	I1024 19:35:02.084014   33086 command_runner.go:130] >     },
	I1024 19:35:02.084018   33086 command_runner.go:130] >     {
	I1024 19:35:02.084024   33086 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1024 19:35:02.084028   33086 command_runner.go:130] >       "repoTags": [
	I1024 19:35:02.084034   33086 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1024 19:35:02.084040   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084044   33086 command_runner.go:130] >       "repoDigests": [
	I1024 19:35:02.084052   33086 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1024 19:35:02.084062   33086 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1024 19:35:02.084067   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084088   33086 command_runner.go:130] >       "size": "31470524",
	I1024 19:35:02.084096   33086 command_runner.go:130] >       "uid": null,
	I1024 19:35:02.084104   33086 command_runner.go:130] >       "username": "",
	I1024 19:35:02.084111   33086 command_runner.go:130] >       "spec": null,
	I1024 19:35:02.084115   33086 command_runner.go:130] >       "pinned": false
	I1024 19:35:02.084119   33086 command_runner.go:130] >     },
	I1024 19:35:02.084122   33086 command_runner.go:130] >     {
	I1024 19:35:02.084129   33086 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1024 19:35:02.084135   33086 command_runner.go:130] >       "repoTags": [
	I1024 19:35:02.084140   33086 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1024 19:35:02.084145   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084149   33086 command_runner.go:130] >       "repoDigests": [
	I1024 19:35:02.084159   33086 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1024 19:35:02.084166   33086 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1024 19:35:02.084172   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084177   33086 command_runner.go:130] >       "size": "53621675",
	I1024 19:35:02.084186   33086 command_runner.go:130] >       "uid": null,
	I1024 19:35:02.084190   33086 command_runner.go:130] >       "username": "",
	I1024 19:35:02.084197   33086 command_runner.go:130] >       "spec": null,
	I1024 19:35:02.084201   33086 command_runner.go:130] >       "pinned": false
	I1024 19:35:02.084210   33086 command_runner.go:130] >     },
	I1024 19:35:02.084213   33086 command_runner.go:130] >     {
	I1024 19:35:02.084221   33086 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1024 19:35:02.084231   33086 command_runner.go:130] >       "repoTags": [
	I1024 19:35:02.084239   33086 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1024 19:35:02.084249   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084256   33086 command_runner.go:130] >       "repoDigests": [
	I1024 19:35:02.084270   33086 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1024 19:35:02.084284   33086 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1024 19:35:02.084302   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084309   33086 command_runner.go:130] >       "size": "295456551",
	I1024 19:35:02.084313   33086 command_runner.go:130] >       "uid": {
	I1024 19:35:02.084320   33086 command_runner.go:130] >         "value": "0"
	I1024 19:35:02.084324   33086 command_runner.go:130] >       },
	I1024 19:35:02.084330   33086 command_runner.go:130] >       "username": "",
	I1024 19:35:02.084345   33086 command_runner.go:130] >       "spec": null,
	I1024 19:35:02.084355   33086 command_runner.go:130] >       "pinned": false
	I1024 19:35:02.084364   33086 command_runner.go:130] >     },
	I1024 19:35:02.084375   33086 command_runner.go:130] >     {
	I1024 19:35:02.084389   33086 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1024 19:35:02.084398   33086 command_runner.go:130] >       "repoTags": [
	I1024 19:35:02.084407   33086 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1024 19:35:02.084416   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084423   33086 command_runner.go:130] >       "repoDigests": [
	I1024 19:35:02.084438   33086 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1024 19:35:02.084451   33086 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1024 19:35:02.084458   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084470   33086 command_runner.go:130] >       "size": "127165392",
	I1024 19:35:02.084474   33086 command_runner.go:130] >       "uid": {
	I1024 19:35:02.084478   33086 command_runner.go:130] >         "value": "0"
	I1024 19:35:02.084483   33086 command_runner.go:130] >       },
	I1024 19:35:02.084487   33086 command_runner.go:130] >       "username": "",
	I1024 19:35:02.084494   33086 command_runner.go:130] >       "spec": null,
	I1024 19:35:02.084499   33086 command_runner.go:130] >       "pinned": false
	I1024 19:35:02.084505   33086 command_runner.go:130] >     },
	I1024 19:35:02.084508   33086 command_runner.go:130] >     {
	I1024 19:35:02.084517   33086 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1024 19:35:02.084521   33086 command_runner.go:130] >       "repoTags": [
	I1024 19:35:02.084526   33086 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1024 19:35:02.084533   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084537   33086 command_runner.go:130] >       "repoDigests": [
	I1024 19:35:02.084547   33086 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1024 19:35:02.084557   33086 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1024 19:35:02.084561   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084565   33086 command_runner.go:130] >       "size": "123188534",
	I1024 19:35:02.084571   33086 command_runner.go:130] >       "uid": {
	I1024 19:35:02.084575   33086 command_runner.go:130] >         "value": "0"
	I1024 19:35:02.084581   33086 command_runner.go:130] >       },
	I1024 19:35:02.084585   33086 command_runner.go:130] >       "username": "",
	I1024 19:35:02.084589   33086 command_runner.go:130] >       "spec": null,
	I1024 19:35:02.084595   33086 command_runner.go:130] >       "pinned": false
	I1024 19:35:02.084599   33086 command_runner.go:130] >     },
	I1024 19:35:02.084604   33086 command_runner.go:130] >     {
	I1024 19:35:02.084610   33086 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1024 19:35:02.084620   33086 command_runner.go:130] >       "repoTags": [
	I1024 19:35:02.084628   33086 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1024 19:35:02.084631   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084636   33086 command_runner.go:130] >       "repoDigests": [
	I1024 19:35:02.084643   33086 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1024 19:35:02.084652   33086 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1024 19:35:02.084656   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084660   33086 command_runner.go:130] >       "size": "74691991",
	I1024 19:35:02.084667   33086 command_runner.go:130] >       "uid": null,
	I1024 19:35:02.084671   33086 command_runner.go:130] >       "username": "",
	I1024 19:35:02.084675   33086 command_runner.go:130] >       "spec": null,
	I1024 19:35:02.084682   33086 command_runner.go:130] >       "pinned": false
	I1024 19:35:02.084686   33086 command_runner.go:130] >     },
	I1024 19:35:02.084692   33086 command_runner.go:130] >     {
	I1024 19:35:02.084697   33086 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1024 19:35:02.084702   33086 command_runner.go:130] >       "repoTags": [
	I1024 19:35:02.084707   33086 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1024 19:35:02.084713   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084719   33086 command_runner.go:130] >       "repoDigests": [
	I1024 19:35:02.084787   33086 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1024 19:35:02.084801   33086 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1024 19:35:02.084804   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084811   33086 command_runner.go:130] >       "size": "61498678",
	I1024 19:35:02.084818   33086 command_runner.go:130] >       "uid": {
	I1024 19:35:02.084825   33086 command_runner.go:130] >         "value": "0"
	I1024 19:35:02.084834   33086 command_runner.go:130] >       },
	I1024 19:35:02.084841   33086 command_runner.go:130] >       "username": "",
	I1024 19:35:02.084851   33086 command_runner.go:130] >       "spec": null,
	I1024 19:35:02.084858   33086 command_runner.go:130] >       "pinned": false
	I1024 19:35:02.084866   33086 command_runner.go:130] >     },
	I1024 19:35:02.084873   33086 command_runner.go:130] >     {
	I1024 19:35:02.084886   33086 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1024 19:35:02.084895   33086 command_runner.go:130] >       "repoTags": [
	I1024 19:35:02.084903   33086 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1024 19:35:02.084915   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084922   33086 command_runner.go:130] >       "repoDigests": [
	I1024 19:35:02.084941   33086 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1024 19:35:02.084953   33086 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1024 19:35:02.084960   33086 command_runner.go:130] >       ],
	I1024 19:35:02.084965   33086 command_runner.go:130] >       "size": "750414",
	I1024 19:35:02.084972   33086 command_runner.go:130] >       "uid": {
	I1024 19:35:02.084976   33086 command_runner.go:130] >         "value": "65535"
	I1024 19:35:02.084982   33086 command_runner.go:130] >       },
	I1024 19:35:02.084986   33086 command_runner.go:130] >       "username": "",
	I1024 19:35:02.084990   33086 command_runner.go:130] >       "spec": null,
	I1024 19:35:02.084996   33086 command_runner.go:130] >       "pinned": false
	I1024 19:35:02.085000   33086 command_runner.go:130] >     }
	I1024 19:35:02.085006   33086 command_runner.go:130] >   ]
	I1024 19:35:02.085009   33086 command_runner.go:130] > }
	I1024 19:35:02.085126   33086 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:35:02.085140   33086 cache_images.go:84] Images are preloaded, skipping loading
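A compact way to read the same check is to pull just the repo tags out of the crictl JSON; this sketch assumes jq is available wherever the output is inspected (the log only shows crictl on the guest).

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # expect kindnetd, storage-provisioner, coredns, etcd, kube-apiserver,
    # kube-controller-manager, kube-proxy, kube-scheduler and pause:3.9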
	I1024 19:35:02.085212   33086 ssh_runner.go:195] Run: crio config
	I1024 19:35:02.135117   33086 command_runner.go:130] ! time="2023-10-24 19:35:02.127289197Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1024 19:35:02.135210   33086 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1024 19:35:02.146027   33086 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1024 19:35:02.146053   33086 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1024 19:35:02.146064   33086 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1024 19:35:02.146070   33086 command_runner.go:130] > #
	I1024 19:35:02.146080   33086 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1024 19:35:02.146098   33086 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1024 19:35:02.146107   33086 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1024 19:35:02.146128   33086 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1024 19:35:02.146135   33086 command_runner.go:130] > # reload'.
	I1024 19:35:02.146145   33086 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1024 19:35:02.146157   33086 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1024 19:35:02.146170   33086 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1024 19:35:02.146182   33086 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1024 19:35:02.146191   33086 command_runner.go:130] > [crio]
	I1024 19:35:02.146201   33086 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1024 19:35:02.146212   33086 command_runner.go:130] > # containers images, in this directory.
	I1024 19:35:02.146223   33086 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1024 19:35:02.146241   33086 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1024 19:35:02.146252   33086 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1024 19:35:02.146261   33086 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1024 19:35:02.146270   33086 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1024 19:35:02.146274   33086 command_runner.go:130] > storage_driver = "overlay"
	I1024 19:35:02.146280   33086 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1024 19:35:02.146295   33086 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1024 19:35:02.146305   33086 command_runner.go:130] > storage_option = [
	I1024 19:35:02.146316   33086 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1024 19:35:02.146324   33086 command_runner.go:130] > ]
	I1024 19:35:02.146335   33086 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1024 19:35:02.146348   33086 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1024 19:35:02.146358   33086 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1024 19:35:02.146368   33086 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1024 19:35:02.146381   33086 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1024 19:35:02.146391   33086 command_runner.go:130] > # always happen on a node reboot
	I1024 19:35:02.146400   33086 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1024 19:35:02.146413   33086 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1024 19:35:02.146425   33086 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1024 19:35:02.146445   33086 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1024 19:35:02.146456   33086 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1024 19:35:02.146464   33086 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1024 19:35:02.146475   33086 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1024 19:35:02.146482   33086 command_runner.go:130] > # internal_wipe = true
	I1024 19:35:02.146489   33086 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1024 19:35:02.146498   33086 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1024 19:35:02.146504   33086 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1024 19:35:02.146510   33086 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1024 19:35:02.146516   33086 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1024 19:35:02.146522   33086 command_runner.go:130] > [crio.api]
	I1024 19:35:02.146528   33086 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1024 19:35:02.146535   33086 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1024 19:35:02.146540   33086 command_runner.go:130] > # IP address on which the stream server will listen.
	I1024 19:35:02.146547   33086 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1024 19:35:02.146554   33086 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1024 19:35:02.146561   33086 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1024 19:35:02.146565   33086 command_runner.go:130] > # stream_port = "0"
	I1024 19:35:02.146571   33086 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1024 19:35:02.146576   33086 command_runner.go:130] > # stream_enable_tls = false
	I1024 19:35:02.146583   33086 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1024 19:35:02.146588   33086 command_runner.go:130] > # stream_idle_timeout = ""
	I1024 19:35:02.146596   33086 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1024 19:35:02.146605   33086 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1024 19:35:02.146611   33086 command_runner.go:130] > # minutes.
	I1024 19:35:02.146615   33086 command_runner.go:130] > # stream_tls_cert = ""
	I1024 19:35:02.146623   33086 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1024 19:35:02.146630   33086 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1024 19:35:02.146634   33086 command_runner.go:130] > # stream_tls_key = ""
	I1024 19:35:02.146641   33086 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1024 19:35:02.146649   33086 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1024 19:35:02.146655   33086 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1024 19:35:02.146661   33086 command_runner.go:130] > # stream_tls_ca = ""
	I1024 19:35:02.146668   33086 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:35:02.146675   33086 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1024 19:35:02.146683   33086 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:35:02.146689   33086 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1024 19:35:02.146735   33086 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1024 19:35:02.146746   33086 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1024 19:35:02.146750   33086 command_runner.go:130] > [crio.runtime]
	I1024 19:35:02.146756   33086 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1024 19:35:02.146763   33086 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1024 19:35:02.146767   33086 command_runner.go:130] > # "nofile=1024:2048"
	I1024 19:35:02.146773   33086 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1024 19:35:02.146780   33086 command_runner.go:130] > # default_ulimits = [
	I1024 19:35:02.146784   33086 command_runner.go:130] > # ]
	I1024 19:35:02.146792   33086 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1024 19:35:02.146799   33086 command_runner.go:130] > # no_pivot = false
	I1024 19:35:02.146805   33086 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1024 19:35:02.146813   33086 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1024 19:35:02.146818   33086 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1024 19:35:02.146827   33086 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1024 19:35:02.146832   33086 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1024 19:35:02.146841   33086 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:35:02.146846   33086 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1024 19:35:02.146852   33086 command_runner.go:130] > # Cgroup setting for conmon
	I1024 19:35:02.146859   33086 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1024 19:35:02.146865   33086 command_runner.go:130] > conmon_cgroup = "pod"
	I1024 19:35:02.146872   33086 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1024 19:35:02.146882   33086 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1024 19:35:02.146889   33086 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:35:02.146895   33086 command_runner.go:130] > conmon_env = [
	I1024 19:35:02.146904   33086 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1024 19:35:02.146910   33086 command_runner.go:130] > ]
	I1024 19:35:02.146915   33086 command_runner.go:130] > # Additional environment variables to set for all the
	I1024 19:35:02.146923   33086 command_runner.go:130] > # containers. These are overridden if set in the
	I1024 19:35:02.146928   33086 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1024 19:35:02.146935   33086 command_runner.go:130] > # default_env = [
	I1024 19:35:02.146938   33086 command_runner.go:130] > # ]
	I1024 19:35:02.146943   33086 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1024 19:35:02.146948   33086 command_runner.go:130] > # selinux = false
	I1024 19:35:02.146955   33086 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1024 19:35:02.146969   33086 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1024 19:35:02.146977   33086 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1024 19:35:02.146984   33086 command_runner.go:130] > # seccomp_profile = ""
	I1024 19:35:02.146990   33086 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1024 19:35:02.146998   33086 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1024 19:35:02.147006   33086 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1024 19:35:02.147013   33086 command_runner.go:130] > # which might increase security.
	I1024 19:35:02.147018   33086 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1024 19:35:02.147026   33086 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1024 19:35:02.147032   33086 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1024 19:35:02.147041   33086 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1024 19:35:02.147048   33086 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1024 19:35:02.147055   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:35:02.147065   33086 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1024 19:35:02.147073   33086 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1024 19:35:02.147078   33086 command_runner.go:130] > # the cgroup blockio controller.
	I1024 19:35:02.147082   33086 command_runner.go:130] > # blockio_config_file = ""
	I1024 19:35:02.147089   33086 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1024 19:35:02.147095   33086 command_runner.go:130] > # irqbalance daemon.
	I1024 19:35:02.147101   33086 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1024 19:35:02.147111   33086 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1024 19:35:02.147116   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:35:02.147121   33086 command_runner.go:130] > # rdt_config_file = ""
	I1024 19:35:02.147128   33086 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1024 19:35:02.147135   33086 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1024 19:35:02.147141   33086 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1024 19:35:02.147148   33086 command_runner.go:130] > # separate_pull_cgroup = ""
	I1024 19:35:02.147154   33086 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1024 19:35:02.147162   33086 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1024 19:35:02.147167   33086 command_runner.go:130] > # will be added.
	I1024 19:35:02.147172   33086 command_runner.go:130] > # default_capabilities = [
	I1024 19:35:02.147176   33086 command_runner.go:130] > # 	"CHOWN",
	I1024 19:35:02.147182   33086 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1024 19:35:02.147185   33086 command_runner.go:130] > # 	"FSETID",
	I1024 19:35:02.147189   33086 command_runner.go:130] > # 	"FOWNER",
	I1024 19:35:02.147195   33086 command_runner.go:130] > # 	"SETGID",
	I1024 19:35:02.147199   33086 command_runner.go:130] > # 	"SETUID",
	I1024 19:35:02.147205   33086 command_runner.go:130] > # 	"SETPCAP",
	I1024 19:35:02.147209   33086 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1024 19:35:02.147213   33086 command_runner.go:130] > # 	"KILL",
	I1024 19:35:02.147216   33086 command_runner.go:130] > # ]
	I1024 19:35:02.147225   33086 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1024 19:35:02.147233   33086 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:35:02.147237   33086 command_runner.go:130] > # default_sysctls = [
	I1024 19:35:02.147241   33086 command_runner.go:130] > # ]
	I1024 19:35:02.147246   33086 command_runner.go:130] > # List of devices on the host that a
	I1024 19:35:02.147255   33086 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1024 19:35:02.147259   33086 command_runner.go:130] > # allowed_devices = [
	I1024 19:35:02.147265   33086 command_runner.go:130] > # 	"/dev/fuse",
	I1024 19:35:02.147269   33086 command_runner.go:130] > # ]
	I1024 19:35:02.147277   33086 command_runner.go:130] > # List of additional devices, specified as
	I1024 19:35:02.147284   33086 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1024 19:35:02.147292   33086 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1024 19:35:02.147335   33086 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:35:02.147348   33086 command_runner.go:130] > # additional_devices = [
	I1024 19:35:02.147353   33086 command_runner.go:130] > # ]
	I1024 19:35:02.147361   33086 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1024 19:35:02.147371   33086 command_runner.go:130] > # cdi_spec_dirs = [
	I1024 19:35:02.147377   33086 command_runner.go:130] > # 	"/etc/cdi",
	I1024 19:35:02.147390   33086 command_runner.go:130] > # 	"/var/run/cdi",
	I1024 19:35:02.147396   33086 command_runner.go:130] > # ]
	I1024 19:35:02.147403   33086 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1024 19:35:02.147411   33086 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1024 19:35:02.147417   33086 command_runner.go:130] > # Defaults to false.
	I1024 19:35:02.147422   33086 command_runner.go:130] > # device_ownership_from_security_context = false
	I1024 19:35:02.147431   33086 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1024 19:35:02.147437   33086 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1024 19:35:02.147444   33086 command_runner.go:130] > # hooks_dir = [
	I1024 19:35:02.147449   33086 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1024 19:35:02.147455   33086 command_runner.go:130] > # ]
	I1024 19:35:02.147461   33086 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1024 19:35:02.147470   33086 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1024 19:35:02.147477   33086 command_runner.go:130] > # its default mounts from the following two files:
	I1024 19:35:02.147481   33086 command_runner.go:130] > #
	I1024 19:35:02.147487   33086 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1024 19:35:02.147496   33086 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1024 19:35:02.147501   33086 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1024 19:35:02.147509   33086 command_runner.go:130] > #
	I1024 19:35:02.147515   33086 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1024 19:35:02.147523   33086 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1024 19:35:02.147530   33086 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1024 19:35:02.147536   33086 command_runner.go:130] > #      only add mounts it finds in this file.
	I1024 19:35:02.147539   33086 command_runner.go:130] > #
	I1024 19:35:02.147543   33086 command_runner.go:130] > # default_mounts_file = ""
	I1024 19:35:02.147549   33086 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1024 19:35:02.147558   33086 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1024 19:35:02.147562   33086 command_runner.go:130] > pids_limit = 1024
	I1024 19:35:02.147570   33086 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1024 19:35:02.147579   33086 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1024 19:35:02.147588   33086 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1024 19:35:02.147599   33086 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1024 19:35:02.147603   33086 command_runner.go:130] > # log_size_max = -1
	I1024 19:35:02.147611   33086 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1024 19:35:02.147617   33086 command_runner.go:130] > # log_to_journald = false
	I1024 19:35:02.147623   33086 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1024 19:35:02.147633   33086 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1024 19:35:02.147640   33086 command_runner.go:130] > # Path to directory for container attach sockets.
	I1024 19:35:02.147645   33086 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1024 19:35:02.147653   33086 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1024 19:35:02.147657   33086 command_runner.go:130] > # bind_mount_prefix = ""
	I1024 19:35:02.147665   33086 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1024 19:35:02.147669   33086 command_runner.go:130] > # read_only = false
	I1024 19:35:02.147677   33086 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1024 19:35:02.147684   33086 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1024 19:35:02.147690   33086 command_runner.go:130] > # live configuration reload.
	I1024 19:35:02.147694   33086 command_runner.go:130] > # log_level = "info"
	I1024 19:35:02.147702   33086 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1024 19:35:02.147707   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:35:02.147713   33086 command_runner.go:130] > # log_filter = ""
	I1024 19:35:02.147722   33086 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1024 19:35:02.147728   33086 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1024 19:35:02.147734   33086 command_runner.go:130] > # separated by comma.
	I1024 19:35:02.147738   33086 command_runner.go:130] > # uid_mappings = ""
	I1024 19:35:02.147748   33086 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1024 19:35:02.147754   33086 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1024 19:35:02.147761   33086 command_runner.go:130] > # separated by comma.
	I1024 19:35:02.147765   33086 command_runner.go:130] > # gid_mappings = ""
	I1024 19:35:02.147772   33086 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1024 19:35:02.147778   33086 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:35:02.147786   33086 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:35:02.147791   33086 command_runner.go:130] > # minimum_mappable_uid = -1
	I1024 19:35:02.147798   33086 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1024 19:35:02.147804   33086 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:35:02.147813   33086 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:35:02.147818   33086 command_runner.go:130] > # minimum_mappable_gid = -1
	I1024 19:35:02.147824   33086 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1024 19:35:02.147832   33086 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1024 19:35:02.147837   33086 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1024 19:35:02.147844   33086 command_runner.go:130] > # ctr_stop_timeout = 30
	I1024 19:35:02.147850   33086 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1024 19:35:02.147858   33086 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1024 19:35:02.147865   33086 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1024 19:35:02.147873   33086 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1024 19:35:02.147879   33086 command_runner.go:130] > drop_infra_ctr = false
	I1024 19:35:02.147885   33086 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1024 19:35:02.147893   33086 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1024 19:35:02.147900   33086 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1024 19:35:02.147906   33086 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1024 19:35:02.147912   33086 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1024 19:35:02.147919   33086 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1024 19:35:02.147927   33086 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1024 19:35:02.147936   33086 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1024 19:35:02.147942   33086 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1024 19:35:02.147949   33086 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1024 19:35:02.147957   33086 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1024 19:35:02.147968   33086 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1024 19:35:02.147975   33086 command_runner.go:130] > # default_runtime = "runc"
	I1024 19:35:02.147981   33086 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1024 19:35:02.147993   33086 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1024 19:35:02.148005   33086 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1024 19:35:02.148013   33086 command_runner.go:130] > # creation as a file is not desired either.
	I1024 19:35:02.148021   33086 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1024 19:35:02.148028   33086 command_runner.go:130] > # the hostname is being managed dynamically.
	I1024 19:35:02.148033   33086 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1024 19:35:02.148037   33086 command_runner.go:130] > # ]
	I1024 19:35:02.148043   33086 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1024 19:35:02.148051   33086 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1024 19:35:02.148058   33086 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1024 19:35:02.148066   33086 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1024 19:35:02.148071   33086 command_runner.go:130] > #
	I1024 19:35:02.148076   33086 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1024 19:35:02.148084   33086 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1024 19:35:02.148088   33086 command_runner.go:130] > #  runtime_type = "oci"
	I1024 19:35:02.148094   33086 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1024 19:35:02.148098   33086 command_runner.go:130] > #  privileged_without_host_devices = false
	I1024 19:35:02.148105   33086 command_runner.go:130] > #  allowed_annotations = []
	I1024 19:35:02.148109   33086 command_runner.go:130] > # Where:
	I1024 19:35:02.148120   33086 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1024 19:35:02.148128   33086 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1024 19:35:02.148135   33086 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1024 19:35:02.148142   33086 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1024 19:35:02.148147   33086 command_runner.go:130] > #   in $PATH.
	I1024 19:35:02.148153   33086 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1024 19:35:02.148160   33086 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1024 19:35:02.148166   33086 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1024 19:35:02.148172   33086 command_runner.go:130] > #   state.
	I1024 19:35:02.148178   33086 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1024 19:35:02.148186   33086 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1024 19:35:02.148192   33086 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1024 19:35:02.148200   33086 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1024 19:35:02.148206   33086 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1024 19:35:02.148217   33086 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1024 19:35:02.148223   33086 command_runner.go:130] > #   The currently recognized values are:
	I1024 19:35:02.148233   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1024 19:35:02.148242   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1024 19:35:02.148250   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1024 19:35:02.148258   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1024 19:35:02.148266   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1024 19:35:02.148274   33086 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1024 19:35:02.148281   33086 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1024 19:35:02.148292   33086 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1024 19:35:02.148303   33086 command_runner.go:130] > #   should be moved to the container's cgroup
	I1024 19:35:02.148310   33086 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1024 19:35:02.148321   33086 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1024 19:35:02.148329   33086 command_runner.go:130] > runtime_type = "oci"
	I1024 19:35:02.148337   33086 command_runner.go:130] > runtime_root = "/run/runc"
	I1024 19:35:02.148345   33086 command_runner.go:130] > runtime_config_path = ""
	I1024 19:35:02.148355   33086 command_runner.go:130] > monitor_path = ""
	I1024 19:35:02.148361   33086 command_runner.go:130] > monitor_cgroup = ""
	I1024 19:35:02.148371   33086 command_runner.go:130] > monitor_exec_cgroup = ""
	I1024 19:35:02.148383   33086 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1024 19:35:02.148393   33086 command_runner.go:130] > # running containers
	I1024 19:35:02.148400   33086 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1024 19:35:02.148416   33086 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1024 19:35:02.148480   33086 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1024 19:35:02.148491   33086 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1024 19:35:02.148496   33086 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1024 19:35:02.148501   33086 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1024 19:35:02.148505   33086 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1024 19:35:02.148512   33086 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1024 19:35:02.148517   33086 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1024 19:35:02.148524   33086 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1024 19:35:02.148530   33086 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1024 19:35:02.148538   33086 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1024 19:35:02.148544   33086 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1024 19:35:02.148553   33086 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1024 19:35:02.148561   33086 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1024 19:35:02.148571   33086 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1024 19:35:02.148580   33086 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1024 19:35:02.148590   33086 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1024 19:35:02.148596   33086 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1024 19:35:02.148607   33086 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1024 19:35:02.148611   33086 command_runner.go:130] > # Example:
	I1024 19:35:02.148616   33086 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1024 19:35:02.148624   33086 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1024 19:35:02.148629   33086 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1024 19:35:02.148636   33086 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1024 19:35:02.148640   33086 command_runner.go:130] > # cpuset = 0
	I1024 19:35:02.148646   33086 command_runner.go:130] > # cpushares = "0-1"
	I1024 19:35:02.148650   33086 command_runner.go:130] > # Where:
	I1024 19:35:02.148657   33086 command_runner.go:130] > # The workload name is workload-type.
	I1024 19:35:02.148664   33086 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1024 19:35:02.148672   33086 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1024 19:35:02.148677   33086 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1024 19:35:02.148685   33086 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1024 19:35:02.148693   33086 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1024 19:35:02.148697   33086 command_runner.go:130] > # 
	I1024 19:35:02.148705   33086 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1024 19:35:02.148711   33086 command_runner.go:130] > #
	I1024 19:35:02.148720   33086 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1024 19:35:02.148728   33086 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1024 19:35:02.148735   33086 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1024 19:35:02.148743   33086 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1024 19:35:02.148749   33086 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1024 19:35:02.148755   33086 command_runner.go:130] > [crio.image]
	I1024 19:35:02.148761   33086 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1024 19:35:02.148765   33086 command_runner.go:130] > # default_transport = "docker://"
	I1024 19:35:02.148772   33086 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1024 19:35:02.148780   33086 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:35:02.148786   33086 command_runner.go:130] > # global_auth_file = ""
	I1024 19:35:02.148791   33086 command_runner.go:130] > # The image used to instantiate infra containers.
	I1024 19:35:02.148797   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:35:02.148802   33086 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1024 19:35:02.148813   33086 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1024 19:35:02.148819   33086 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:35:02.148824   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:35:02.148831   33086 command_runner.go:130] > # pause_image_auth_file = ""
	I1024 19:35:02.148839   33086 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1024 19:35:02.148848   33086 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1024 19:35:02.148854   33086 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1024 19:35:02.148859   33086 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1024 19:35:02.148863   33086 command_runner.go:130] > # pause_command = "/pause"
	I1024 19:35:02.148869   33086 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1024 19:35:02.148875   33086 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1024 19:35:02.148881   33086 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1024 19:35:02.148886   33086 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1024 19:35:02.148891   33086 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1024 19:35:02.148898   33086 command_runner.go:130] > # signature_policy = ""
	I1024 19:35:02.148903   33086 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1024 19:35:02.148909   33086 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1024 19:35:02.148913   33086 command_runner.go:130] > # changing them here.
	I1024 19:35:02.148917   33086 command_runner.go:130] > # insecure_registries = [
	I1024 19:35:02.148920   33086 command_runner.go:130] > # ]
	I1024 19:35:02.148928   33086 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1024 19:35:02.148935   33086 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1024 19:35:02.148942   33086 command_runner.go:130] > # image_volumes = "mkdir"
	I1024 19:35:02.148949   33086 command_runner.go:130] > # Temporary directory to use for storing big files
	I1024 19:35:02.148956   33086 command_runner.go:130] > # big_files_temporary_dir = ""
	I1024 19:35:02.148965   33086 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1024 19:35:02.148971   33086 command_runner.go:130] > # CNI plugins.
	I1024 19:35:02.148975   33086 command_runner.go:130] > [crio.network]
	I1024 19:35:02.148983   33086 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1024 19:35:02.148989   33086 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1024 19:35:02.148996   33086 command_runner.go:130] > # cni_default_network = ""
	I1024 19:35:02.149002   33086 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1024 19:35:02.149009   33086 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1024 19:35:02.149014   33086 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1024 19:35:02.149021   33086 command_runner.go:130] > # plugin_dirs = [
	I1024 19:35:02.149025   33086 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1024 19:35:02.149028   33086 command_runner.go:130] > # ]
	I1024 19:35:02.149035   33086 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1024 19:35:02.149039   33086 command_runner.go:130] > [crio.metrics]
	I1024 19:35:02.149048   33086 command_runner.go:130] > # Globally enable or disable metrics support.
	I1024 19:35:02.149056   33086 command_runner.go:130] > enable_metrics = true
	I1024 19:35:02.149065   33086 command_runner.go:130] > # Specify enabled metrics collectors.
	I1024 19:35:02.149070   33086 command_runner.go:130] > # Per default all metrics are enabled.
	I1024 19:35:02.149078   33086 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1024 19:35:02.149086   33086 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1024 19:35:02.149092   33086 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1024 19:35:02.149099   33086 command_runner.go:130] > # metrics_collectors = [
	I1024 19:35:02.149102   33086 command_runner.go:130] > # 	"operations",
	I1024 19:35:02.149107   33086 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1024 19:35:02.149112   33086 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1024 19:35:02.149118   33086 command_runner.go:130] > # 	"operations_errors",
	I1024 19:35:02.149123   33086 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1024 19:35:02.149129   33086 command_runner.go:130] > # 	"image_pulls_by_name",
	I1024 19:35:02.149133   33086 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1024 19:35:02.149140   33086 command_runner.go:130] > # 	"image_pulls_failures",
	I1024 19:35:02.149145   33086 command_runner.go:130] > # 	"image_pulls_successes",
	I1024 19:35:02.149151   33086 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1024 19:35:02.149156   33086 command_runner.go:130] > # 	"image_layer_reuse",
	I1024 19:35:02.149165   33086 command_runner.go:130] > # 	"containers_oom_total",
	I1024 19:35:02.149171   33086 command_runner.go:130] > # 	"containers_oom",
	I1024 19:35:02.149176   33086 command_runner.go:130] > # 	"processes_defunct",
	I1024 19:35:02.149182   33086 command_runner.go:130] > # 	"operations_total",
	I1024 19:35:02.149186   33086 command_runner.go:130] > # 	"operations_latency_seconds",
	I1024 19:35:02.149191   33086 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1024 19:35:02.149196   33086 command_runner.go:130] > # 	"operations_errors_total",
	I1024 19:35:02.149200   33086 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1024 19:35:02.149206   33086 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1024 19:35:02.149211   33086 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1024 19:35:02.149217   33086 command_runner.go:130] > # 	"image_pulls_success_total",
	I1024 19:35:02.149222   33086 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1024 19:35:02.149229   33086 command_runner.go:130] > # 	"containers_oom_count_total",
	I1024 19:35:02.149232   33086 command_runner.go:130] > # ]
	I1024 19:35:02.149237   33086 command_runner.go:130] > # The port on which the metrics server will listen.
	I1024 19:35:02.149244   33086 command_runner.go:130] > # metrics_port = 9090
	I1024 19:35:02.149249   33086 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1024 19:35:02.149255   33086 command_runner.go:130] > # metrics_socket = ""
	I1024 19:35:02.149262   33086 command_runner.go:130] > # The certificate for the secure metrics server.
	I1024 19:35:02.149271   33086 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1024 19:35:02.149278   33086 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1024 19:35:02.149287   33086 command_runner.go:130] > # certificate on any modification event.
	I1024 19:35:02.149293   33086 command_runner.go:130] > # metrics_cert = ""
	I1024 19:35:02.149322   33086 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1024 19:35:02.149330   33086 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1024 19:35:02.149339   33086 command_runner.go:130] > # metrics_key = ""
	I1024 19:35:02.149351   33086 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1024 19:35:02.149361   33086 command_runner.go:130] > [crio.tracing]
	I1024 19:35:02.149369   33086 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1024 19:35:02.149379   33086 command_runner.go:130] > # enable_tracing = false
	I1024 19:35:02.149387   33086 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1024 19:35:02.149397   33086 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1024 19:35:02.149404   33086 command_runner.go:130] > # Number of samples to collect per million spans.
	I1024 19:35:02.149413   33086 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1024 19:35:02.149422   33086 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1024 19:35:02.149431   33086 command_runner.go:130] > [crio.stats]
	I1024 19:35:02.149444   33086 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1024 19:35:02.149453   33086 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1024 19:35:02.149457   33086 command_runner.go:130] > # stats_collection_period = 0
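	The dump above is the CRI-O configuration on the node: mostly commented-out defaults, plus the uncommented settings actually in effect (cgroup_manager = "cgroupfs", pids_limit = 1024, pause_image = "registry.k8s.io/pause:3.9", the conmon/pinns paths, and the 16 MiB gRPC message limits). A minimal sketch for spot-checking those values on a live node; the profile name is taken from this log, and /etc/crio/crio.conf is assumed to be the loaded config file (CRI-O may also read drop-ins from /etc/crio/crio.conf.d/):
	  # print the uncommented runtime/image settings of interest
	  minikube -p multinode-632589 ssh -- sudo grep -E '^(cgroup_manager|conmon|pids_limit|drop_infra_ctr|pinns_path|pause_image|grpc_max)' /etc/crio/crio.conf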
	I1024 19:35:02.149528   33086 cni.go:84] Creating CNI manager for ""
	I1024 19:35:02.149538   33086 cni.go:136] 3 nodes found, recommending kindnet
	I1024 19:35:02.149555   33086 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:35:02.149574   33086 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-632589 NodeName:multinode-632589 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:35:02.149685   33086 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-632589"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
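	The four kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what minikube renders and, a few lines below, copies to the node as kubeadm.yaml.new. Once the cluster is up, kubeadm persists the ClusterConfiguration into the kubeadm-config ConfigMap (and the kubelet settings into kubelet-config), so the applied values can be compared against this log; a quick check, assuming kubectl has a context for this profile:
	  kubectl --context multinode-632589 -n kube-system get configmap kubeadm-config -o yaml
	  kubectl --context multinode-632589 -n kube-system get configmap kubelet-config -o yaml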
	
	I1024 19:35:02.149755   33086 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-632589 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:35:02.149808   33086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:35:02.158673   33086 command_runner.go:130] > kubeadm
	I1024 19:35:02.158691   33086 command_runner.go:130] > kubectl
	I1024 19:35:02.158700   33086 command_runner.go:130] > kubelet
	I1024 19:35:02.158719   33086 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:35:02.158775   33086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:35:02.168042   33086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1024 19:35:02.183467   33086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:35:02.198182   33086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
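	Those three scp lines install the kubelet systemd drop-in, the kubelet unit file, and the freshly rendered kubeadm config onto the node. A short sketch to inspect what actually landed there, reusing the profile name from this log (the minikube ssh wrapper is an assumption about how you reach the VM):
	  # unit file plus the 10-kubeadm.conf drop-in in one view
	  minikube -p multinode-632589 ssh -- sudo systemctl cat kubelet
	  # the rendered kubeadm config staged for this (re)start
	  minikube -p multinode-632589 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new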
	I1024 19:35:02.213773   33086 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I1024 19:35:02.217032   33086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
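	The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the control-plane IP used throughout this run (192.168.39.247). Verifying it is just a grep on the node:
	  minikube -p multinode-632589 ssh -- grep control-plane.minikube.internal /etc/hosts
	  # expected: 192.168.39.247	control-plane.minikube.internal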
	I1024 19:35:02.228792   33086 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589 for IP: 192.168.39.247
	I1024 19:35:02.228835   33086 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:35:02.229003   33086 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 19:35:02.229076   33086 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 19:35:02.229168   33086 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key
	I1024 19:35:02.229243   33086 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.key.890e8c75
	I1024 19:35:02.229315   33086 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.key
	I1024 19:35:02.229332   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1024 19:35:02.229350   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1024 19:35:02.229368   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1024 19:35:02.229385   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1024 19:35:02.229402   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 19:35:02.229422   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 19:35:02.229445   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 19:35:02.229463   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 19:35:02.229534   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 19:35:02.229573   33086 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 19:35:02.229588   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 19:35:02.229626   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 19:35:02.229664   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:35:02.229700   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 19:35:02.229752   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:35:02.229787   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:35:02.229807   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem -> /usr/share/ca-certificates/16298.pem
	I1024 19:35:02.229824   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> /usr/share/ca-certificates/162982.pem
	I1024 19:35:02.230453   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:35:02.253044   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:35:02.275124   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:35:02.297052   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 19:35:02.327131   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:35:02.350006   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:35:02.372184   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:35:02.394153   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 19:35:02.415484   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:35:02.437774   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 19:35:02.459908   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 19:35:02.482723   33086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:35:02.499437   33086 ssh_runner.go:195] Run: openssl version
	I1024 19:35:02.504670   33086 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1024 19:35:02.504992   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 19:35:02.516631   33086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 19:35:02.521033   33086 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 19:35:02.521324   33086 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 19:35:02.521378   33086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 19:35:02.526855   33086 command_runner.go:130] > 3ec20f2e
	I1024 19:35:02.526912   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 19:35:02.538519   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:35:02.549145   33086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:35:02.553640   33086 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:35:02.553802   33086 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:35:02.553843   33086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:35:02.558896   33086 command_runner.go:130] > b5213941
	I1024 19:35:02.559100   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:35:02.569420   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 19:35:02.581525   33086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 19:35:02.586350   33086 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 19:35:02.586382   33086 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 19:35:02.586431   33086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 19:35:02.591795   33086 command_runner.go:130] > 51391683
	I1024 19:35:02.592123   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
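	The pattern in the commands above is the standard OpenSSL CA directory layout: each PEM under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs, with the hash produced by openssl x509 -hash. The hashes logged here (3ec20f2e, b5213941, 51391683) can be reproduced by hand on the node, e.g.:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem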
	I1024 19:35:02.602932   33086 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:35:02.607374   33086 command_runner.go:130] > ca.crt
	I1024 19:35:02.607393   33086 command_runner.go:130] > ca.key
	I1024 19:35:02.607401   33086 command_runner.go:130] > healthcheck-client.crt
	I1024 19:35:02.607408   33086 command_runner.go:130] > healthcheck-client.key
	I1024 19:35:02.607415   33086 command_runner.go:130] > peer.crt
	I1024 19:35:02.607421   33086 command_runner.go:130] > peer.key
	I1024 19:35:02.607427   33086 command_runner.go:130] > server.crt
	I1024 19:35:02.607443   33086 command_runner.go:130] > server.key
	I1024 19:35:02.607496   33086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 19:35:02.612986   33086 command_runner.go:130] > Certificate will not expire
	I1024 19:35:02.613145   33086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 19:35:02.618649   33086 command_runner.go:130] > Certificate will not expire
	I1024 19:35:02.618919   33086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 19:35:02.624598   33086 command_runner.go:130] > Certificate will not expire
	I1024 19:35:02.624676   33086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 19:35:02.630441   33086 command_runner.go:130] > Certificate will not expire
	I1024 19:35:02.630505   33086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 19:35:02.636102   33086 command_runner.go:130] > Certificate will not expire
	I1024 19:35:02.636399   33086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1024 19:35:02.641912   33086 command_runner.go:130] > Certificate will not expire
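
The "openssl x509 -checkend 86400" calls above exit non-zero if a certificate expires within the next 86400 seconds (24 hours), which is why each check logs "Certificate will not expire". A hedged Go equivalent of that check using crypto/x509, shown only to make the semantics explicit:

    // certExpiresWithin reports whether the certificate at path expires within d,
    // the same test `openssl x509 -checkend 86400` performs in the log above.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func certExpiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// Expiring within d means "now + d" falls past NotAfter.
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := certExpiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	if expiring {
    		fmt.Println("Certificate will expire within 24h")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }
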
	I1024 19:35:02.642241   33086 kubeadm.go:404] StartCluster: {Name:multinode-632589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:35:02.642345   33086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:35:02.642380   33086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:35:02.683575   33086 cri.go:89] found id: ""
	I1024 19:35:02.683663   33086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:35:02.693836   33086 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1024 19:35:02.693860   33086 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1024 19:35:02.693869   33086 command_runner.go:130] > /var/lib/minikube/etcd:
	I1024 19:35:02.693874   33086 command_runner.go:130] > member
	I1024 19:35:02.693892   33086 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 19:35:02.693900   33086 kubeadm.go:636] restartCluster start
	I1024 19:35:02.693957   33086 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 19:35:02.703512   33086 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:02.704187   33086 kubeconfig.go:92] found "multinode-632589" server: "https://192.168.39.247:8443"
	I1024 19:35:02.704831   33086 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:35:02.705181   33086 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
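
The client config dump above corresponds to a client-go rest.Config built from the profile's client certificate/key and the cluster CA. A hedged reconstruction of the equivalent setup (paths and host taken from the dump; this is not minikube's internal code):

    // Build a Kubernetes client from the profile's client cert, key and CA,
    // matching the rest.Config fields logged above.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.39.247:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt",
    		},
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		fmt.Println("apiserver not reachable yet:", err)
    		return
    	}
    	fmt.Println("control plane version:", v.GitVersion)
    }
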
	I1024 19:35:02.706037   33086 cert_rotation.go:137] Starting client certificate rotation controller
	I1024 19:35:02.706242   33086 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 19:35:02.715617   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:02.715665   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:02.726851   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:02.726867   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:02.726899   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:02.737757   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:03.238754   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:03.238847   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:03.250761   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:03.738280   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:03.738340   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:03.750276   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:04.238435   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:04.238527   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:04.251078   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:04.738678   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:04.738751   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:04.750600   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:05.238411   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:05.238491   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:05.250317   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:05.737842   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:05.737920   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:05.749780   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:06.238295   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:06.238405   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:06.250472   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:06.738002   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:06.738072   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:06.749942   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:07.238581   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:07.238651   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:07.251032   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:07.738552   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:07.738629   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:07.750667   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:08.238421   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:08.238500   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:08.250293   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:08.738831   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:08.738906   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:08.750734   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:09.238280   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:09.238346   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:09.250375   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:09.738000   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:09.738076   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:09.750792   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:10.238405   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:10.238479   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:10.250018   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:10.738640   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:10.738716   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:10.750631   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:11.237928   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:11.238020   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:11.250000   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:11.738452   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:11.738530   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:11.750083   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:12.238709   33086 api_server.go:166] Checking apiserver status ...
	I1024 19:35:12.238792   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:12.251197   33086 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:12.715953   33086 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
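
The repeated "Checking apiserver status" entries above are a bounded poll: pgrep for the kube-apiserver process roughly every 500ms until a context deadline (about 10s here) expires, at which point the restart path concludes the cluster "needs reconfigure". A minimal sketch of that wait pattern, with the interval and deadline inferred from the log timestamps:

    // waitForAPIServerPID polls `pgrep -xnf kube-apiserver.*minikube.*` until it
    // succeeds or the context deadline is hit, mirroring the loop in the log.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitForAPIServerPID(ctx context.Context) (string, error) {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil
    		}
    		select {
    		case <-ctx.Done():
    			return "", fmt.Errorf("unable to get apiserver pid: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	pid, err := waitForAPIServerPID(ctx)
    	if err != nil {
    		fmt.Println(err) // e.g. wraps "context deadline exceeded", as in the log
    		return
    	}
    	fmt.Println("kube-apiserver pid:", pid)
    }
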
	I1024 19:35:12.716020   33086 kubeadm.go:1128] stopping kube-system containers ...
	I1024 19:35:12.716034   33086 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 19:35:12.716102   33086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:35:12.755487   33086 cri.go:89] found id: ""
	I1024 19:35:12.755565   33086 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 19:35:12.771482   33086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:35:12.781001   33086 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1024 19:35:12.781026   33086 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1024 19:35:12.781034   33086 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1024 19:35:12.781045   33086 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:35:12.781076   33086 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:35:12.781119   33086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:35:12.791380   33086 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 19:35:12.791401   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:35:12.916841   33086 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:35:12.916871   33086 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1024 19:35:12.916880   33086 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1024 19:35:12.916886   33086 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1024 19:35:12.916894   33086 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1024 19:35:12.916900   33086 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1024 19:35:12.916905   33086 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1024 19:35:12.916911   33086 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1024 19:35:12.916918   33086 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1024 19:35:12.916924   33086 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1024 19:35:12.916932   33086 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1024 19:35:12.916936   33086 command_runner.go:130] > [certs] Using the existing "sa" key
	I1024 19:35:12.916959   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:35:12.969911   33086 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:35:13.293803   33086 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:35:13.675240   33086 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:35:14.164429   33086 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:35:14.357612   33086 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:35:14.360597   33086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.443603143s)
	I1024 19:35:14.360631   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:35:14.550045   33086 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:35:14.550075   33086 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:35:14.550082   33086 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1024 19:35:14.550110   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:35:14.640938   33086 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:35:14.640967   33086 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:35:14.640977   33086 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:35:14.640998   33086 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:35:14.641028   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:35:14.704733   33086 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
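
Because the existing certificates are reusable, the restart path replays individual "kubeadm init" phases against the regenerated /var/tmp/minikube/kubeadm.yaml instead of re-bootstrapping from scratch. A sketch of that sequence as plain command execution (paths and Kubernetes version taken from the log; error handling trimmed):

    // Replay the kubeadm init phases seen above: certs, kubeconfig, kubelet-start,
    // control-plane manifests, and the local etcd manifest.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const (
    		binPath = "/var/lib/minikube/binaries/v1.28.3:$PATH"
    		config  = "/var/tmp/minikube/kubeadm.yaml"
    	)
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf("sudo env PATH=%q kubeadm init phase %s --config %s", binPath, phase, config)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("%s\n%s", cmd, out)
    		if err != nil {
    			fmt.Println("phase failed:", err)
    			return
    		}
    	}
    }

The remaining "addon all" phase (CoreDNS and kube-proxy) is deferred until the apiserver reports healthy, which is what the following healthz polling is waiting for.
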
	I1024 19:35:14.715430   33086 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:35:14.715517   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:35:14.730713   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:35:15.242676   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:35:15.742536   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:35:16.241933   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:35:16.741944   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:35:17.242868   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:35:17.261485   33086 command_runner.go:130] > 1077
	I1024 19:35:17.263254   33086 api_server.go:72] duration metric: took 2.547823899s to wait for apiserver process to appear ...
	I1024 19:35:17.263273   33086 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:35:17.263287   33086 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I1024 19:35:21.198913   33086 api_server.go:279] https://192.168.39.247:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 19:35:21.198969   33086 api_server.go:103] status: https://192.168.39.247:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 19:35:21.198984   33086 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I1024 19:35:21.244243   33086 api_server.go:279] https://192.168.39.247:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1024 19:35:21.244274   33086 api_server.go:103] status: https://192.168.39.247:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1024 19:35:21.744823   33086 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I1024 19:35:21.751722   33086 api_server.go:279] https://192.168.39.247:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 19:35:21.751753   33086 api_server.go:103] status: https://192.168.39.247:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 19:35:22.245358   33086 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I1024 19:35:22.267766   33086 api_server.go:279] https://192.168.39.247:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 19:35:22.267808   33086 api_server.go:103] status: https://192.168.39.247:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 19:35:22.744378   33086 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I1024 19:35:22.749513   33086 api_server.go:279] https://192.168.39.247:8443/healthz returned 200:
	ok
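
The 403 -> 500 -> 200 progression above is the apiserver finishing startup: anonymous /healthz requests are rejected until the RBAC bootstrap post-start hook installs the system:public-info-viewer role, then the aggregated healthz lists each hook ([-] for rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes until they complete), and finally everything reports ok. A rough polling sketch of that wait; InsecureSkipVerify is used only to keep the example short, whereas the real client verifies against the cluster CA:

    // Poll https://<apiserver>/healthz until it returns 200, printing each body.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < 20; i++ {
    		resp, err := client.Get("https://192.168.39.247:8443/healthz")
    		if err != nil {
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		if resp.StatusCode == http.StatusOK {
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
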
	I1024 19:35:22.749594   33086 round_trippers.go:463] GET https://192.168.39.247:8443/version
	I1024 19:35:22.749605   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:22.749617   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:22.749631   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:22.756887   33086 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1024 19:35:22.756906   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:22.756913   33086 round_trippers.go:580]     Content-Length: 264
	I1024 19:35:22.756918   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:22 GMT
	I1024 19:35:22.756923   33086 round_trippers.go:580]     Audit-Id: c9282c9f-c0d9-4efb-8dc5-8b539641e15b
	I1024 19:35:22.756929   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:22.756934   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:22.756938   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:22.756943   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:22.756973   33086 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1024 19:35:22.757036   33086 api_server.go:141] control plane version: v1.28.3
	I1024 19:35:22.757054   33086 api_server.go:131] duration metric: took 5.493775508s to wait for apiserver health ...
	I1024 19:35:22.757061   33086 cni.go:84] Creating CNI manager for ""
	I1024 19:35:22.757068   33086 cni.go:136] 3 nodes found, recommending kindnet
	I1024 19:35:22.759178   33086 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1024 19:35:22.760599   33086 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:35:22.773176   33086 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1024 19:35:22.773205   33086 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1024 19:35:22.773215   33086 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1024 19:35:22.773226   33086 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:35:22.773235   33086 command_runner.go:130] > Access: 2023-10-24 19:34:45.736816710 +0000
	I1024 19:35:22.773247   33086 command_runner.go:130] > Modify: 2023-10-16 21:25:26.000000000 +0000
	I1024 19:35:22.773256   33086 command_runner.go:130] > Change: 2023-10-24 19:34:43.720816710 +0000
	I1024 19:35:22.773262   33086 command_runner.go:130] >  Birth: -
	I1024 19:35:22.774423   33086 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 19:35:22.774441   33086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:35:22.808797   33086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 19:35:24.119089   33086 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1024 19:35:24.119114   33086 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1024 19:35:24.119120   33086 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1024 19:35:24.119125   33086 command_runner.go:130] > daemonset.apps/kindnet configured
	I1024 19:35:24.119155   33086 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.310333931s)
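
With three nodes in the profile, kindnet is selected as the CNI, the manifest is copied to /var/tmp/minikube/cni.yaml, and it is applied with the cluster's own kubectl binary and kubeconfig; the "unchanged"/"configured" output shows the apply is idempotent across restarts. A simplified stand-in for that step (minikube actually runs this over SSH on the node):

    // Apply the generated CNI manifest with the in-cluster kubectl, as logged above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.28.3/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }
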
	I1024 19:35:24.119174   33086 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:35:24.119237   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:35:24.119245   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.119253   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.119258   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.131706   33086 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1024 19:35:24.131730   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.131737   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.131742   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.131747   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.131752   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.131757   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.131762   33086 round_trippers.go:580]     Audit-Id: 983018bd-f800-4ff8-b9ba-17d1ac035460
	I1024 19:35:24.134102   33086 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"834"},"items":[{"metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"794","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83204 chars]
	I1024 19:35:24.139966   33086 system_pods.go:59] 12 kube-system pods found
	I1024 19:35:24.140016   33086 system_pods.go:61] "coredns-5dd5756b68-c5l8s" [20aa782d-e6ed-45ad-b625-556d1a8503c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:35:24.140030   33086 system_pods.go:61] "etcd-multinode-632589" [a84a9833-e3b8-4148-9ee7-3f4479a10186] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 19:35:24.140046   33086 system_pods.go:61] "kindnet-pwmd9" [6e2f396b-dc71-4dd2-8521-ecce4287f61c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1024 19:35:24.140056   33086 system_pods.go:61] "kindnet-qvkwv" [ec1ea359-8477-4d62-ab29-95a048433575] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1024 19:35:24.140065   33086 system_pods.go:61] "kindnet-xh444" [dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1024 19:35:24.140081   33086 system_pods.go:61] "kube-apiserver-multinode-632589" [34fcbf72-bf92-477f-8c1c-b0fd908c561d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 19:35:24.140101   33086 system_pods.go:61] "kube-controller-manager-multinode-632589" [6eb03208-9b7f-4b5d-a7cf-03dd9c7948e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 19:35:24.140115   33086 system_pods.go:61] "kube-proxy-6vn7s" [d6b9189d-1bbe-4de8-a0d8-4ea43b55a45b] Running
	I1024 19:35:24.140121   33086 system_pods.go:61] "kube-proxy-gd49s" [a1c573fd-3f4b-4d90-a366-6d859a121185] Running
	I1024 19:35:24.140178   33086 system_pods.go:61] "kube-proxy-vjr8q" [844852b2-3dbb-4d52-a752-b39021adfc04] Running
	I1024 19:35:24.140210   33086 system_pods.go:61] "kube-scheduler-multinode-632589" [e85a7c19-1a25-42f5-81bd-16ed7070ca3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 19:35:24.140222   33086 system_pods.go:61] "storage-provisioner" [4023756b-6e38-476d-8dec-90ea2346dc01] Running
	I1024 19:35:24.140230   33086 system_pods.go:74] duration metric: took 21.049359ms to wait for pod list to return data ...
	I1024 19:35:24.140242   33086 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:35:24.140316   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes
	I1024 19:35:24.140326   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.140337   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.140350   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.150407   33086 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1024 19:35:24.150424   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.150431   33086 round_trippers.go:580]     Audit-Id: 78f9c6fd-b125-4443-89d7-607de50610a2
	I1024 19:35:24.150437   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.150443   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.150448   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.150454   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.150460   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.150759   33086 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"834"},"items":[{"metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"730","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 14802 chars]
	I1024 19:35:24.151829   33086 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:35:24.151858   33086 node_conditions.go:123] node cpu capacity is 2
	I1024 19:35:24.151871   33086 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:35:24.151877   33086 node_conditions.go:123] node cpu capacity is 2
	I1024 19:35:24.151885   33086 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:35:24.151892   33086 node_conditions.go:123] node cpu capacity is 2
	I1024 19:35:24.151897   33086 node_conditions.go:105] duration metric: took 11.647691ms to run NodePressure ...
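
The NodePressure check above simply lists the nodes and reads each node's ephemeral-storage and CPU capacity from its status. A minimal client-go sketch of the same read; the kubeconfig path is the one shown earlier in the log, and this is not minikube's internal code:

    // List nodes and print the capacity fields the log reports per node.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17485-9023/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    	}
    }
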
	I1024 19:35:24.151917   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:35:24.387577   33086 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1024 19:35:24.387605   33086 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1024 19:35:24.387632   33086 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 19:35:24.387758   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1024 19:35:24.387772   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.387784   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.387793   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.393278   33086 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1024 19:35:24.393316   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.393326   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.393332   33086 round_trippers.go:580]     Audit-Id: 90c4f22e-b7da-4ae0-9d32-ea8b96e73770
	I1024 19:35:24.393338   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.393344   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.393349   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.393354   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.393932   33086 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"836"},"items":[{"metadata":{"name":"etcd-multinode-632589","namespace":"kube-system","uid":"a84a9833-e3b8-4148-9ee7-3f4479a10186","resourceVersion":"788","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.247:2379","kubernetes.io/config.hash":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.mirror":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.seen":"2023-10-24T19:24:56.213299221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I1024 19:35:24.395243   33086 kubeadm.go:787] kubelet initialised
	I1024 19:35:24.395260   33086 kubeadm.go:788] duration metric: took 7.618419ms waiting for restarted kubelet to initialise ...
	I1024 19:35:24.395267   33086 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:35:24.395331   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:35:24.395339   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.395346   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.395354   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.398813   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:35:24.398832   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.398839   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.398844   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.398849   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.398854   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.398859   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.398864   33086 round_trippers.go:580]     Audit-Id: 27024aec-787b-4dbe-90a1-f5c56e1763a7
	I1024 19:35:24.400207   33086 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"836"},"items":[{"metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"794","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83204 chars]
	I1024 19:35:24.402685   33086 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:24.402769   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:35:24.402781   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.402792   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.402803   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.404717   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:35:24.404734   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.404744   33086 round_trippers.go:580]     Audit-Id: b714511e-1b2b-4cde-aa64-eddfe22ac31f
	I1024 19:35:24.404753   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.404762   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.404774   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.404779   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.404784   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.404916   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"794","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1024 19:35:24.405388   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:24.405402   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.405413   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.405422   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.407292   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:35:24.407304   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.407310   33086 round_trippers.go:580]     Audit-Id: 35408aab-6890-47f2-b4c3-382e4f0fdb19
	I1024 19:35:24.407315   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.407326   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.407331   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.407337   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.407346   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.407671   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"730","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1024 19:35:24.408036   33086 pod_ready.go:97] node "multinode-632589" hosting pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-632589" has status "Ready":"False"
	I1024 19:35:24.408056   33086 pod_ready.go:81] duration metric: took 5.350872ms waiting for pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace to be "Ready" ...
	E1024 19:35:24.408067   33086 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-632589" hosting pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-632589" has status "Ready":"False"
	I1024 19:35:24.408081   33086 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:24.408136   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-632589
	I1024 19:35:24.408148   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.408158   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.408171   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.410344   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:24.410362   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.410373   33086 round_trippers.go:580]     Audit-Id: f5dd309f-6177-4a20-b928-40de6e746855
	I1024 19:35:24.410381   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.410395   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.410404   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.410409   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.410415   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.410567   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-632589","namespace":"kube-system","uid":"a84a9833-e3b8-4148-9ee7-3f4479a10186","resourceVersion":"788","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.247:2379","kubernetes.io/config.hash":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.mirror":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.seen":"2023-10-24T19:24:56.213299221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1024 19:35:24.410917   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:24.410930   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.410940   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.410965   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.412768   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:35:24.412780   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.412785   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.412792   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.412800   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.412810   33086 round_trippers.go:580]     Audit-Id: 8411de9f-9eea-4071-91ef-441f20610ef2
	I1024 19:35:24.412825   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.412832   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.412964   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"730","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1024 19:35:24.413251   33086 pod_ready.go:97] node "multinode-632589" hosting pod "etcd-multinode-632589" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-632589" has status "Ready":"False"
	I1024 19:35:24.413268   33086 pod_ready.go:81] duration metric: took 5.179608ms waiting for pod "etcd-multinode-632589" in "kube-system" namespace to be "Ready" ...
	E1024 19:35:24.413278   33086 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-632589" hosting pod "etcd-multinode-632589" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-632589" has status "Ready":"False"
	I1024 19:35:24.413311   33086 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:24.413366   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-632589
	I1024 19:35:24.413376   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.413386   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.413397   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.415426   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:24.415437   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.415443   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.415448   33086 round_trippers.go:580]     Audit-Id: 05045fdd-5aaf-44ef-bfa0-6ed4c2d7b1a2
	I1024 19:35:24.415453   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.415461   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.415469   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.415492   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.415675   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-632589","namespace":"kube-system","uid":"34fcbf72-bf92-477f-8c1c-b0fd908c561d","resourceVersion":"789","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.247:8443","kubernetes.io/config.hash":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.mirror":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.seen":"2023-10-24T19:24:56.213304140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1024 19:35:24.416090   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:24.416108   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.416118   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.416126   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.417935   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:35:24.417954   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.417965   33086 round_trippers.go:580]     Audit-Id: d6f6a8b7-be7b-4c20-a9f6-897bf8d44243
	I1024 19:35:24.417975   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.417984   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.417993   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.418000   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.418005   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.418130   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"730","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1024 19:35:24.418489   33086 pod_ready.go:97] node "multinode-632589" hosting pod "kube-apiserver-multinode-632589" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-632589" has status "Ready":"False"
	I1024 19:35:24.418515   33086 pod_ready.go:81] duration metric: took 5.192641ms waiting for pod "kube-apiserver-multinode-632589" in "kube-system" namespace to be "Ready" ...
	E1024 19:35:24.418530   33086 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-632589" hosting pod "kube-apiserver-multinode-632589" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-632589" has status "Ready":"False"
	I1024 19:35:24.418544   33086 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:24.418603   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-632589
	I1024 19:35:24.418615   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.418629   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.418638   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.420946   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:24.420962   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.420971   33086 round_trippers.go:580]     Audit-Id: 07db34b6-598f-4f14-9e42-b9e3c599c061
	I1024 19:35:24.420979   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.420987   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.421005   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.421017   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.421030   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.421192   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-632589","namespace":"kube-system","uid":"6eb03208-9b7f-4b5d-a7cf-03dd9c7948e6","resourceVersion":"790","creationTimestamp":"2023-10-24T19:24:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9a4a5ca64f08e8d78cd58402e3f15810","kubernetes.io/config.mirror":"9a4a5ca64f08e8d78cd58402e3f15810","kubernetes.io/config.seen":"2023-10-24T19:24:47.530352200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I1024 19:35:24.519907   33086 request.go:629] Waited for 98.254585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:24.519980   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:24.519985   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.519992   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.520001   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.522854   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:24.522872   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.522878   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.522884   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.522892   33086 round_trippers.go:580]     Audit-Id: 5c0e42e1-f9aa-42ea-ba7e-caf801523ce5
	I1024 19:35:24.522904   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.522916   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.522927   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.523072   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"730","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1024 19:35:24.523357   33086 pod_ready.go:97] node "multinode-632589" hosting pod "kube-controller-manager-multinode-632589" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-632589" has status "Ready":"False"
	I1024 19:35:24.523372   33086 pod_ready.go:81] duration metric: took 104.820439ms waiting for pod "kube-controller-manager-multinode-632589" in "kube-system" namespace to be "Ready" ...
	E1024 19:35:24.523380   33086 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-632589" hosting pod "kube-controller-manager-multinode-632589" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-632589" has status "Ready":"False"
	I1024 19:35:24.523386   33086 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6vn7s" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:24.719755   33086 request.go:629] Waited for 196.290301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:35:24.719811   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:35:24.719820   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.719833   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.719842   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.723672   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:35:24.723692   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.723699   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.723704   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.723710   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.723715   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.723720   33086 round_trippers.go:580]     Audit-Id: d677409b-5a2b-4f42-919a-0e048c313e27
	I1024 19:35:24.723725   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.724156   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6vn7s","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6b9189d-1bbe-4de8-a0d8-4ea43b55a45b","resourceVersion":"505","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1024 19:35:24.919549   33086 request.go:629] Waited for 194.909628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:35:24.919613   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:35:24.919619   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:24.919627   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:24.919633   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:24.922283   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:24.922301   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:24.922308   33086 round_trippers.go:580]     Audit-Id: 33fb4026-76cd-40d6-bc86-518fbfeace8f
	I1024 19:35:24.922314   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:24.922319   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:24.922328   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:24.922341   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:24.922353   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:24 GMT
	I1024 19:35:24.922562   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"571","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3684 chars]
	I1024 19:35:24.922899   33086 pod_ready.go:92] pod "kube-proxy-6vn7s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:35:24.922918   33086 pod_ready.go:81] duration metric: took 399.525846ms waiting for pod "kube-proxy-6vn7s" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:24.922931   33086 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gd49s" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:25.119993   33086 request.go:629] Waited for 196.977027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd49s
	I1024 19:35:25.120089   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd49s
	I1024 19:35:25.120103   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:25.120117   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:25.120126   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:25.123201   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:35:25.123217   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:25.123223   33086 round_trippers.go:580]     Audit-Id: f03d5773-7b06-422c-a9e9-111411598238
	I1024 19:35:25.123228   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:25.123233   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:25.123238   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:25.123243   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:25.123248   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:25 GMT
	I1024 19:35:25.123433   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gd49s","generateName":"kube-proxy-","namespace":"kube-system","uid":"a1c573fd-3f4b-4d90-a366-6d859a121185","resourceVersion":"834","creationTimestamp":"2023-10-24T19:25:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1024 19:35:25.320209   33086 request.go:629] Waited for 196.366201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:25.320258   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:25.320262   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:25.320270   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:25.320275   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:25.322593   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:25.322613   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:25.322619   33086 round_trippers.go:580]     Audit-Id: d65cf835-ddd4-4200-8e7c-f650b3d4a595
	I1024 19:35:25.322626   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:25.322634   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:25.322644   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:25.322652   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:25.322665   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:25 GMT
	I1024 19:35:25.322847   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"730","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1024 19:35:25.323315   33086 pod_ready.go:97] node "multinode-632589" hosting pod "kube-proxy-gd49s" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-632589" has status "Ready":"False"
	I1024 19:35:25.323339   33086 pod_ready.go:81] duration metric: took 400.397129ms waiting for pod "kube-proxy-gd49s" in "kube-system" namespace to be "Ready" ...
	E1024 19:35:25.323352   33086 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-632589" hosting pod "kube-proxy-gd49s" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-632589" has status "Ready":"False"
	I1024 19:35:25.323367   33086 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vjr8q" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:25.519784   33086 request.go:629] Waited for 196.339942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjr8q
	I1024 19:35:25.519880   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjr8q
	I1024 19:35:25.519891   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:25.519902   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:25.519915   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:25.523527   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:35:25.523547   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:25.523555   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:25.523560   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:25.523566   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:25.523577   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:25 GMT
	I1024 19:35:25.523593   33086 round_trippers.go:580]     Audit-Id: 01d9f333-eb7f-492e-b543-b65f544e0291
	I1024 19:35:25.523602   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:25.523763   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vjr8q","generateName":"kube-proxy-","namespace":"kube-system","uid":"844852b2-3dbb-4d52-a752-b39021adfc04","resourceVersion":"706","creationTimestamp":"2023-10-24T19:26:43Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:26:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5521 chars]
	I1024 19:35:25.719491   33086 request.go:629] Waited for 195.273816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m03
	I1024 19:35:25.719552   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m03
	I1024 19:35:25.719557   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:25.719565   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:25.719571   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:25.722212   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:25.722232   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:25.722240   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:25.722245   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:25.722250   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:25 GMT
	I1024 19:35:25.722255   33086 round_trippers.go:580]     Audit-Id: 07f1999c-1b0b-4a7b-a706-7f60243f8cea
	I1024 19:35:25.722261   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:25.722266   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:25.722458   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m03","uid":"b46ce2c5-5d6c-4894-ad88-10111966a53a","resourceVersion":"839","creationTimestamp":"2023-10-24T19:27:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:27:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3532 chars]
	I1024 19:35:25.722727   33086 pod_ready.go:92] pod "kube-proxy-vjr8q" in "kube-system" namespace has status "Ready":"True"
	I1024 19:35:25.722741   33086 pod_ready.go:81] duration metric: took 399.366282ms waiting for pod "kube-proxy-vjr8q" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:25.722750   33086 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:25.919997   33086 request.go:629] Waited for 197.175658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-632589
	I1024 19:35:25.920056   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-632589
	I1024 19:35:25.920061   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:25.920069   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:25.920076   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:25.923377   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:35:25.923397   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:25.923406   33086 round_trippers.go:580]     Audit-Id: 1443f595-212e-4a89-87c1-6d54f12539ff
	I1024 19:35:25.923414   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:25.923422   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:25.923429   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:25.923436   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:25.923444   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:25 GMT
	I1024 19:35:25.924127   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-632589","namespace":"kube-system","uid":"e85a7c19-1a25-42f5-81bd-16ed7070ca3c","resourceVersion":"792","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"83154ed970e6208e036ff8de26a58e6d","kubernetes.io/config.mirror":"83154ed970e6208e036ff8de26a58e6d","kubernetes.io/config.seen":"2023-10-24T19:24:56.213306721Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1024 19:35:26.119871   33086 request.go:629] Waited for 195.386811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:26.119922   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:26.119931   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:26.119941   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:26.119947   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:26.122533   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:26.122552   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:26.122559   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:26.122565   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:26.122572   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:26.122583   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:26 GMT
	I1024 19:35:26.122591   33086 round_trippers.go:580]     Audit-Id: 384c9ef2-9916-425d-8b9a-410de5c2cc14
	I1024 19:35:26.122600   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:26.122710   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"730","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1024 19:35:26.122994   33086 pod_ready.go:97] node "multinode-632589" hosting pod "kube-scheduler-multinode-632589" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-632589" has status "Ready":"False"
	I1024 19:35:26.123012   33086 pod_ready.go:81] duration metric: took 400.25731ms waiting for pod "kube-scheduler-multinode-632589" in "kube-system" namespace to be "Ready" ...
	E1024 19:35:26.123020   33086 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-632589" hosting pod "kube-scheduler-multinode-632589" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-632589" has status "Ready":"False"
	I1024 19:35:26.123029   33086 pod_ready.go:38] duration metric: took 1.727751956s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:35:26.123043   33086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:35:26.136247   33086 command_runner.go:130] > -16
	I1024 19:35:26.136280   33086 ops.go:34] apiserver oom_adj: -16
	I1024 19:35:26.136288   33086 kubeadm.go:640] restartCluster took 23.4423817s
	I1024 19:35:26.136295   33086 kubeadm.go:406] StartCluster complete in 23.494059914s
	I1024 19:35:26.136308   33086 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:35:26.136388   33086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:35:26.137096   33086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:35:26.137388   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:35:26.137554   33086 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:35:26.140475   33086 out.go:177] * Enabled addons: 
	I1024 19:35:26.137751   33086 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:35:26.137753   33086 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:35:26.141985   33086 addons.go:502] enable addons completed in 4.435233ms: enabled=[]
	I1024 19:35:26.142342   33086 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:35:26.142781   33086 round_trippers.go:463] GET https://192.168.39.247:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 19:35:26.142796   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:26.142808   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:26.142818   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:26.145850   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:35:26.145866   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:26.145875   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:26.145883   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:26.145890   33086 round_trippers.go:580]     Content-Length: 291
	I1024 19:35:26.145898   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:26 GMT
	I1024 19:35:26.145909   33086 round_trippers.go:580]     Audit-Id: ab176e6e-d6eb-48c1-8a78-e649531338c9
	I1024 19:35:26.145929   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:26.145944   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:26.146013   33086 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d94f45ae-0601-4f22-bf81-4e1e0b9f4023","resourceVersion":"835","creationTimestamp":"2023-10-24T19:24:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1024 19:35:26.146267   33086 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-632589" context rescaled to 1 replicas
	I1024 19:35:26.146308   33086 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:35:26.148035   33086 out.go:177] * Verifying Kubernetes components...
	I1024 19:35:26.149517   33086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:35:26.243495   33086 command_runner.go:130] > apiVersion: v1
	I1024 19:35:26.243518   33086 command_runner.go:130] > data:
	I1024 19:35:26.243525   33086 command_runner.go:130] >   Corefile: |
	I1024 19:35:26.243531   33086 command_runner.go:130] >     .:53 {
	I1024 19:35:26.243538   33086 command_runner.go:130] >         log
	I1024 19:35:26.243556   33086 command_runner.go:130] >         errors
	I1024 19:35:26.243563   33086 command_runner.go:130] >         health {
	I1024 19:35:26.243570   33086 command_runner.go:130] >            lameduck 5s
	I1024 19:35:26.243577   33086 command_runner.go:130] >         }
	I1024 19:35:26.243588   33086 command_runner.go:130] >         ready
	I1024 19:35:26.243603   33086 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1024 19:35:26.243610   33086 command_runner.go:130] >            pods insecure
	I1024 19:35:26.243630   33086 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1024 19:35:26.243653   33086 command_runner.go:130] >            ttl 30
	I1024 19:35:26.243659   33086 command_runner.go:130] >         }
	I1024 19:35:26.243668   33086 command_runner.go:130] >         prometheus :9153
	I1024 19:35:26.243674   33086 command_runner.go:130] >         hosts {
	I1024 19:35:26.243683   33086 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1024 19:35:26.243691   33086 command_runner.go:130] >            fallthrough
	I1024 19:35:26.243701   33086 command_runner.go:130] >         }
	I1024 19:35:26.243708   33086 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1024 19:35:26.243719   33086 command_runner.go:130] >            max_concurrent 1000
	I1024 19:35:26.243728   33086 command_runner.go:130] >         }
	I1024 19:35:26.243740   33086 command_runner.go:130] >         cache 30
	I1024 19:35:26.243750   33086 command_runner.go:130] >         loop
	I1024 19:35:26.243758   33086 command_runner.go:130] >         reload
	I1024 19:35:26.243769   33086 command_runner.go:130] >         loadbalance
	I1024 19:35:26.243775   33086 command_runner.go:130] >     }
	I1024 19:35:26.243784   33086 command_runner.go:130] > kind: ConfigMap
	I1024 19:35:26.243789   33086 command_runner.go:130] > metadata:
	I1024 19:35:26.243801   33086 command_runner.go:130] >   creationTimestamp: "2023-10-24T19:24:56Z"
	I1024 19:35:26.243811   33086 command_runner.go:130] >   name: coredns
	I1024 19:35:26.243818   33086 command_runner.go:130] >   namespace: kube-system
	I1024 19:35:26.243828   33086 command_runner.go:130] >   resourceVersion: "393"
	I1024 19:35:26.243839   33086 command_runner.go:130] >   uid: 2aabb006-845c-4eef-a802-37bc2ba3f811
	I1024 19:35:26.246238   33086 node_ready.go:35] waiting up to 6m0s for node "multinode-632589" to be "Ready" ...
	I1024 19:35:26.246308   33086 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 19:35:26.319587   33086 request.go:629] Waited for 73.244216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:26.319673   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:26.319682   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:26.319693   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:26.319708   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:26.322316   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:26.322337   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:26.322346   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:26.322353   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:26.322361   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:26 GMT
	I1024 19:35:26.322368   33086 round_trippers.go:580]     Audit-Id: aa0d1b66-80dd-4138-b541-2110d6bad212
	I1024 19:35:26.322376   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:26.322384   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:26.322600   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"730","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1024 19:35:26.519303   33086 request.go:629] Waited for 196.281223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:26.519398   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:26.519408   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:26.519420   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:26.519433   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:26.522139   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:26.522156   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:26.522163   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:26 GMT
	I1024 19:35:26.522169   33086 round_trippers.go:580]     Audit-Id: f11b672c-6317-41e6-a157-2f8cd0632bcf
	I1024 19:35:26.522174   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:26.522179   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:26.522184   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:26.522189   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:26.522388   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"730","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1024 19:35:27.023586   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:27.023604   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:27.023613   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:27.023618   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:27.026222   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:27.026246   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:27.026256   33086 round_trippers.go:580]     Audit-Id: ac54e62b-e0ad-41e5-83ea-14030e6e4d52
	I1024 19:35:27.026264   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:27.026277   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:27.026285   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:27.026293   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:27.026302   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:27 GMT
	I1024 19:35:27.026560   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"730","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1024 19:35:27.523194   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:27.523213   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:27.523221   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:27.523227   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:27.525676   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:27.525691   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:27.525698   33086 round_trippers.go:580]     Audit-Id: 5bb4e352-3176-47ce-b239-618ff7dab78a
	I1024 19:35:27.525703   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:27.525709   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:27.525717   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:27.525724   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:27.525733   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:27 GMT
	I1024 19:35:27.526072   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:27.526429   33086 node_ready.go:49] node "multinode-632589" has status "Ready":"True"
	I1024 19:35:27.526463   33086 node_ready.go:38] duration metric: took 1.280190406s waiting for node "multinode-632589" to be "Ready" ...
	I1024 19:35:27.526480   33086 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:35:27.526550   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:35:27.526557   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:27.526565   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:27.526573   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:27.530855   33086 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:35:27.530875   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:27.530886   33086 round_trippers.go:580]     Audit-Id: f29fa330-0c99-49e4-929a-9e4ec033ebce
	I1024 19:35:27.530894   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:27.530907   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:27.530917   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:27.530925   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:27.530937   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:27 GMT
	I1024 19:35:27.532822   33086 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"847"},"items":[{"metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"794","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82954 chars]
	I1024 19:35:27.535578   33086 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:27.535651   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:35:27.535661   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:27.535672   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:27.535682   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:27.538020   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:27.538041   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:27.538051   33086 round_trippers.go:580]     Audit-Id: 2476997c-7fa7-4ed1-9830-8cea53004bda
	I1024 19:35:27.538060   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:27.538066   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:27.538074   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:27.538083   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:27.538100   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:27 GMT
	I1024 19:35:27.538295   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"794","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1024 19:35:27.538740   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:27.538753   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:27.538760   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:27.538767   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:27.540699   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:35:27.540717   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:27.540726   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:27.540735   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:27 GMT
	I1024 19:35:27.540743   33086 round_trippers.go:580]     Audit-Id: 349f489d-403c-4737-b9c1-d09ba5066f23
	I1024 19:35:27.540751   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:27.540763   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:27.540779   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:27.540969   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:27.719662   33086 request.go:629] Waited for 178.350767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:35:27.719749   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:35:27.719756   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:27.719764   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:27.719774   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:27.722576   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:27.722591   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:27.722598   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:27.722603   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:27.722608   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:27 GMT
	I1024 19:35:27.722613   33086 round_trippers.go:580]     Audit-Id: 69724f51-1822-4fed-9d4f-f8f6273e25b4
	I1024 19:35:27.722618   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:27.722623   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:27.722804   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"794","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1024 19:35:27.919693   33086 request.go:629] Waited for 196.369722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:27.919737   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:27.919749   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:27.919770   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:27.919784   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:27.922326   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:27.922357   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:27.922367   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:27.922374   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:27.922382   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:27 GMT
	I1024 19:35:27.922390   33086 round_trippers.go:580]     Audit-Id: d5b56707-835c-4555-8ac7-449b8c7afce9
	I1024 19:35:27.922402   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:27.922411   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:27.922572   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:28.423651   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:35:28.423672   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:28.423680   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:28.423686   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:28.426361   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:28.426381   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:28.426391   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:28.426399   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:28 GMT
	I1024 19:35:28.426408   33086 round_trippers.go:580]     Audit-Id: 2b619528-749e-45c5-be51-3411cf4c37fb
	I1024 19:35:28.426424   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:28.426434   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:28.426446   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:28.426726   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"794","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1024 19:35:28.427276   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:28.427296   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:28.427305   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:28.427317   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:28.429608   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:28.429623   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:28.429633   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:28.429645   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:28.429656   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:28.429665   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:28 GMT
	I1024 19:35:28.429678   33086 round_trippers.go:580]     Audit-Id: 9e239fe9-7974-44a6-aca6-279224b4a1fd
	I1024 19:35:28.429686   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:28.429895   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:28.923528   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:35:28.923550   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:28.923558   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:28.923564   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:28.929205   33086 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1024 19:35:28.929223   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:28.929233   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:28.929241   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:28.929249   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:28.929265   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:28 GMT
	I1024 19:35:28.929273   33086 round_trippers.go:580]     Audit-Id: e2ba6726-872f-49a7-a047-3f5971a7bdd5
	I1024 19:35:28.929283   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:28.929537   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"794","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1024 19:35:28.930169   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:28.930187   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:28.930197   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:28.930206   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:28.933175   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:28.933187   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:28.933193   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:28 GMT
	I1024 19:35:28.933199   33086 round_trippers.go:580]     Audit-Id: e2d845bd-0a64-44a1-9439-2d51af78252a
	I1024 19:35:28.933210   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:28.933219   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:28.933227   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:28.933236   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:28.933645   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:29.423269   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:35:29.423293   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:29.423301   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:29.423308   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:29.426365   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:35:29.426387   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:29.426398   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:29.426408   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:29.426416   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:29.426424   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:29.426433   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:29 GMT
	I1024 19:35:29.426446   33086 round_trippers.go:580]     Audit-Id: c585ea93-1ab5-47e7-967c-ae66328e29f1
	I1024 19:35:29.426600   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"794","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1024 19:35:29.427042   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:29.427057   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:29.427068   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:29.427077   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:29.429343   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:29.429380   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:29.429390   33086 round_trippers.go:580]     Audit-Id: 68b668f0-a833-4130-a4e6-04df4fc744f5
	I1024 19:35:29.429398   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:29.429406   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:29.429414   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:29.429422   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:29.429430   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:29 GMT
	I1024 19:35:29.429571   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:29.923227   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:35:29.923264   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:29.923273   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:29.923282   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:29.925776   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:29.925800   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:29.925810   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:29 GMT
	I1024 19:35:29.925819   33086 round_trippers.go:580]     Audit-Id: d5fc4737-8326-4a4a-af28-c3a37ea61d20
	I1024 19:35:29.925827   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:29.925834   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:29.925843   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:29.925851   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:29.926039   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"794","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1024 19:35:29.926487   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:29.926503   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:29.926514   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:29.926523   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:29.932145   33086 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1024 19:35:29.932161   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:29.932169   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:29 GMT
	I1024 19:35:29.932178   33086 round_trippers.go:580]     Audit-Id: 21369cc6-610e-4fd9-9daf-e9faffbc5bd3
	I1024 19:35:29.932186   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:29.932195   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:29.932208   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:29.932216   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:29.932324   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:29.932729   33086 pod_ready.go:102] pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace has status "Ready":"False"
	I1024 19:35:30.423935   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:35:30.423970   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:30.423982   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:30.423993   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:30.428370   33086 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:35:30.428395   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:30.428405   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:30.428415   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:30 GMT
	I1024 19:35:30.428428   33086 round_trippers.go:580]     Audit-Id: b8d043c4-d391-40af-93ba-c3d778da50aa
	I1024 19:35:30.428437   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:30.428455   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:30.428465   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:30.428974   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"794","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1024 19:35:30.429739   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:30.429752   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:30.429763   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:30.429771   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:30.435167   33086 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1024 19:35:30.435190   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:30.435200   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:30 GMT
	I1024 19:35:30.435210   33086 round_trippers.go:580]     Audit-Id: 91f5067e-59e4-43b3-8ade-b1f5e4be9b0e
	I1024 19:35:30.435219   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:30.435228   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:30.435237   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:30.435246   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:30.435400   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:30.923821   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:35:30.923843   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:30.923863   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:30.923869   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:30.926908   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:35:30.926930   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:30.926940   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:30 GMT
	I1024 19:35:30.926967   33086 round_trippers.go:580]     Audit-Id: 2baf148b-2990-449a-8056-0b373a34f6b3
	I1024 19:35:30.926981   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:30.926990   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:30.927002   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:30.927013   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:30.927168   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"794","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1024 19:35:30.927720   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:30.927738   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:30.927749   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:30.927758   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:30.929537   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:35:30.929579   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:30.929597   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:30.929606   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:30.929611   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:30.929616   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:30 GMT
	I1024 19:35:30.929621   33086 round_trippers.go:580]     Audit-Id: 8c1f1a37-1b7d-4b6f-8da7-1fb6e9ccef0d
	I1024 19:35:30.929627   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:30.929927   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:31.423579   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:35:31.423599   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:31.423607   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:31.423613   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:31.426744   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:35:31.426763   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:31.426773   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:31 GMT
	I1024 19:35:31.426785   33086 round_trippers.go:580]     Audit-Id: 18107766-2adf-4412-8aa8-e68e97d00f59
	I1024 19:35:31.426791   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:31.426799   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:31.426805   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:31.426812   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:31.427120   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"856","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1024 19:35:31.427522   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:31.427533   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:31.427542   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:31.427548   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:31.429362   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:35:31.429377   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:31.429384   33086 round_trippers.go:580]     Audit-Id: 5fead0a2-f693-42a6-8ff0-3844165116a9
	I1024 19:35:31.429389   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:31.429394   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:31.429401   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:31.429413   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:31.429421   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:31 GMT
	I1024 19:35:31.429541   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:31.429806   33086 pod_ready.go:92] pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:35:31.429818   33086 pod_ready.go:81] duration metric: took 3.894219007s waiting for pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:31.429845   33086 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:31.429891   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-632589
	I1024 19:35:31.429899   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:31.429911   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:31.429922   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:31.432129   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:31.432146   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:31.432156   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:31.432164   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:31 GMT
	I1024 19:35:31.432176   33086 round_trippers.go:580]     Audit-Id: b9d553f7-2608-481f-aebd-c5ba88002f59
	I1024 19:35:31.432184   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:31.432198   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:31.432206   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:31.432394   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-632589","namespace":"kube-system","uid":"a84a9833-e3b8-4148-9ee7-3f4479a10186","resourceVersion":"849","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.247:2379","kubernetes.io/config.hash":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.mirror":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.seen":"2023-10-24T19:24:56.213299221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1024 19:35:31.432721   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:31.432733   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:31.432739   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:31.432746   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:31.434517   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:35:31.434533   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:31.434539   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:31.434544   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:31 GMT
	I1024 19:35:31.434550   33086 round_trippers.go:580]     Audit-Id: be4167aa-f7bf-4c62-8822-c7dd3b9e4e65
	I1024 19:35:31.434557   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:31.434562   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:31.434575   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:31.434871   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:31.435143   33086 pod_ready.go:92] pod "etcd-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:35:31.435155   33086 pod_ready.go:81] duration metric: took 5.304779ms waiting for pod "etcd-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:31.435169   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:31.435211   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-632589
	I1024 19:35:31.435218   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:31.435225   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:31.435231   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:31.436934   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:35:31.436950   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:31.436959   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:31.436967   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:31.436975   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:31 GMT
	I1024 19:35:31.436984   33086 round_trippers.go:580]     Audit-Id: bf7c8ada-e6ed-4737-aaac-a296bfd9ed53
	I1024 19:35:31.436993   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:31.437008   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:31.437220   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-632589","namespace":"kube-system","uid":"34fcbf72-bf92-477f-8c1c-b0fd908c561d","resourceVersion":"789","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.247:8443","kubernetes.io/config.hash":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.mirror":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.seen":"2023-10-24T19:24:56.213304140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1024 19:35:31.519857   33086 request.go:629] Waited for 82.248884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:31.519995   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:31.520006   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:31.520013   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:31.520019   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:31.522845   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:31.522866   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:31.522874   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:31.522880   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:31 GMT
	I1024 19:35:31.522885   33086 round_trippers.go:580]     Audit-Id: 67a022d8-8167-4062-bdb1-e3f137b8e470
	I1024 19:35:31.522893   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:31.522902   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:31.522913   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:31.523076   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:31.719855   33086 request.go:629] Waited for 196.431312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-632589
	I1024 19:35:31.719936   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-632589
	I1024 19:35:31.719943   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:31.719950   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:31.719957   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:31.723043   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:35:31.723067   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:31.723077   33086 round_trippers.go:580]     Audit-Id: 9f2e32c5-95c4-4416-a23c-126b3ff57bf6
	I1024 19:35:31.723083   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:31.723088   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:31.723094   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:31.723099   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:31.723104   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:31 GMT
	I1024 19:35:31.724063   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-632589","namespace":"kube-system","uid":"34fcbf72-bf92-477f-8c1c-b0fd908c561d","resourceVersion":"789","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.247:8443","kubernetes.io/config.hash":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.mirror":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.seen":"2023-10-24T19:24:56.213304140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1024 19:35:31.919866   33086 request.go:629] Waited for 195.361344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:31.919920   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:31.919926   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:31.919933   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:31.919943   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:31.923089   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:35:31.923109   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:31.923116   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:31.923125   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:31.923137   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:31.923146   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:31 GMT
	I1024 19:35:31.923156   33086 round_trippers.go:580]     Audit-Id: cb4911d2-561c-4a11-84fe-01e3a78105e3
	I1024 19:35:31.923165   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:31.923322   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:32.424147   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-632589
	I1024 19:35:32.424172   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:32.424180   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:32.424187   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:32.426979   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:32.426998   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:32.427008   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:32.427016   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:32.427024   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:32.427030   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:32.427038   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:32 GMT
	I1024 19:35:32.427045   33086 round_trippers.go:580]     Audit-Id: b3e3d1a1-3384-4943-b1d4-9d6ff3bc8ec3
	I1024 19:35:32.427249   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-632589","namespace":"kube-system","uid":"34fcbf72-bf92-477f-8c1c-b0fd908c561d","resourceVersion":"789","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.247:8443","kubernetes.io/config.hash":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.mirror":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.seen":"2023-10-24T19:24:56.213304140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1024 19:35:32.427715   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:32.427731   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:32.427739   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:32.427747   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:32.429953   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:32.429974   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:32.429984   33086 round_trippers.go:580]     Audit-Id: 81b9e5a5-e0c7-48cc-ade4-e91af703db7e
	I1024 19:35:32.429999   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:32.430007   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:32.430019   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:32.430030   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:32.430039   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:32 GMT
	I1024 19:35:32.430219   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:32.923948   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-632589
	I1024 19:35:32.923969   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:32.923977   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:32.923983   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:32.928268   33086 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:35:32.928286   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:32.928292   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:32 GMT
	I1024 19:35:32.928297   33086 round_trippers.go:580]     Audit-Id: 4e8892b8-7925-4175-932f-dbd6ac0faa27
	I1024 19:35:32.928303   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:32.928314   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:32.928323   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:32.928346   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:32.928934   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-632589","namespace":"kube-system","uid":"34fcbf72-bf92-477f-8c1c-b0fd908c561d","resourceVersion":"789","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.247:8443","kubernetes.io/config.hash":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.mirror":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.seen":"2023-10-24T19:24:56.213304140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1024 19:35:32.929470   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:32.929495   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:32.929506   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:32.929517   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:32.931991   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:32.932006   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:32.932012   33086 round_trippers.go:580]     Audit-Id: 74583fce-b5eb-4f2f-aa3e-1f3821001677
	I1024 19:35:32.932017   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:32.932023   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:32.932033   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:32.932041   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:32.932048   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:32 GMT
	I1024 19:35:32.932256   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:33.424521   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-632589
	I1024 19:35:33.424555   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:33.424566   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:33.424576   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:33.427563   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:33.427582   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:33.427592   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:33.427600   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:33.427607   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:33.427616   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:33 GMT
	I1024 19:35:33.427629   33086 round_trippers.go:580]     Audit-Id: 3314b881-f9e3-46a0-a0c0-90828c4e6b56
	I1024 19:35:33.427639   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:33.427960   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-632589","namespace":"kube-system","uid":"34fcbf72-bf92-477f-8c1c-b0fd908c561d","resourceVersion":"868","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.247:8443","kubernetes.io/config.hash":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.mirror":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.seen":"2023-10-24T19:24:56.213304140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1024 19:35:33.428382   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:33.428395   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:33.428403   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:33.428409   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:33.430481   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:33.430498   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:33.430506   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:33.430513   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:33.430520   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:33 GMT
	I1024 19:35:33.430528   33086 round_trippers.go:580]     Audit-Id: a833d33a-adb3-42f0-9e29-0a5f85cabf78
	I1024 19:35:33.430537   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:33.430546   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:33.430804   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:33.431081   33086 pod_ready.go:92] pod "kube-apiserver-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:35:33.431095   33086 pod_ready.go:81] duration metric: took 1.995917182s waiting for pod "kube-apiserver-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:33.431104   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:33.431160   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-632589
	I1024 19:35:33.431167   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:33.431174   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:33.431180   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:33.433395   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:33.433417   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:33.433427   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:33.433434   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:33 GMT
	I1024 19:35:33.433442   33086 round_trippers.go:580]     Audit-Id: 4fc799b0-3b2d-4168-ac17-2b6209583910
	I1024 19:35:33.433450   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:33.433462   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:33.433477   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:33.433578   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-632589","namespace":"kube-system","uid":"6eb03208-9b7f-4b5d-a7cf-03dd9c7948e6","resourceVersion":"850","creationTimestamp":"2023-10-24T19:24:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9a4a5ca64f08e8d78cd58402e3f15810","kubernetes.io/config.mirror":"9a4a5ca64f08e8d78cd58402e3f15810","kubernetes.io/config.seen":"2023-10-24T19:24:47.530352200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1024 19:35:33.520200   33086 request.go:629] Waited for 86.256779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:33.520262   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:33.520266   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:33.520276   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:33.520284   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:33.522798   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:33.522817   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:33.522832   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:33.522842   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:33.522852   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:33 GMT
	I1024 19:35:33.522881   33086 round_trippers.go:580]     Audit-Id: 25c78e61-41e7-4848-9702-c86b58a6ef3d
	I1024 19:35:33.522894   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:33.522903   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:33.523245   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:33.523614   33086 pod_ready.go:92] pod "kube-controller-manager-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:35:33.523633   33086 pod_ready.go:81] duration metric: took 92.519726ms waiting for pod "kube-controller-manager-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:33.523659   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6vn7s" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:33.719943   33086 request.go:629] Waited for 196.218672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:35:33.720000   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:35:33.720005   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:33.720013   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:33.720018   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:33.722656   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:33.722675   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:33.722685   33086 round_trippers.go:580]     Audit-Id: 61e80b77-a984-4a98-9431-8f624413d1a1
	I1024 19:35:33.722692   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:33.722699   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:33.722707   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:33.722715   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:33.722728   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:33 GMT
	I1024 19:35:33.723163   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6vn7s","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6b9189d-1bbe-4de8-a0d8-4ea43b55a45b","resourceVersion":"505","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1024 19:35:33.919908   33086 request.go:629] Waited for 196.362448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:35:33.919959   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:35:33.919964   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:33.919971   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:33.919977   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:33.922581   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:33.922608   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:33.922619   33086 round_trippers.go:580]     Audit-Id: 01d4dbc2-4430-47ed-b89d-5345159e794a
	I1024 19:35:33.922627   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:33.922635   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:33.922643   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:33.922651   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:33.922659   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:33 GMT
	I1024 19:35:33.922973   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484","resourceVersion":"847","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3684 chars]
	I1024 19:35:33.923298   33086 pod_ready.go:92] pod "kube-proxy-6vn7s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:35:33.923315   33086 pod_ready.go:81] duration metric: took 399.648099ms waiting for pod "kube-proxy-6vn7s" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:33.923324   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gd49s" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:34.119734   33086 request.go:629] Waited for 196.35749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd49s
	I1024 19:35:34.119795   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd49s
	I1024 19:35:34.119800   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:34.119807   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:34.119813   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:34.124614   33086 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:35:34.124635   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:34.124644   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:34.124653   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:34.124659   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:34 GMT
	I1024 19:35:34.124666   33086 round_trippers.go:580]     Audit-Id: 87d61fbd-c19f-41ba-b361-55f1dd9a10f3
	I1024 19:35:34.124674   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:34.124682   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:34.125041   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gd49s","generateName":"kube-proxy-","namespace":"kube-system","uid":"a1c573fd-3f4b-4d90-a366-6d859a121185","resourceVersion":"834","creationTimestamp":"2023-10-24T19:25:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1024 19:35:34.319794   33086 request.go:629] Waited for 194.338132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:34.319844   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:34.319849   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:34.319857   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:34.319863   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:34.322258   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:34.322284   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:34.322294   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:34.322303   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:34.322311   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:34.322320   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:34.322329   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:34 GMT
	I1024 19:35:34.322336   33086 round_trippers.go:580]     Audit-Id: 0d4c6e10-14b5-4aba-861a-ff03d441040a
	I1024 19:35:34.322538   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:34.322881   33086 pod_ready.go:92] pod "kube-proxy-gd49s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:35:34.322905   33086 pod_ready.go:81] duration metric: took 399.574338ms waiting for pod "kube-proxy-gd49s" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:34.322920   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vjr8q" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:34.519283   33086 request.go:629] Waited for 196.279917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjr8q
	I1024 19:35:34.519351   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjr8q
	I1024 19:35:34.519357   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:34.519365   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:34.519371   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:34.522107   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:34.522127   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:34.522137   33086 round_trippers.go:580]     Audit-Id: 28996f08-8497-4b5f-8c10-652c18c4812b
	I1024 19:35:34.522144   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:34.522151   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:34.522159   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:34.522168   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:34.522178   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:34 GMT
	I1024 19:35:34.522362   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vjr8q","generateName":"kube-proxy-","namespace":"kube-system","uid":"844852b2-3dbb-4d52-a752-b39021adfc04","resourceVersion":"706","creationTimestamp":"2023-10-24T19:26:43Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:26:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5521 chars]
	I1024 19:35:34.720144   33086 request.go:629] Waited for 197.346738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m03
	I1024 19:35:34.720215   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m03
	I1024 19:35:34.720224   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:34.720239   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:34.720253   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:34.722988   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:34.723006   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:34.723012   33086 round_trippers.go:580]     Audit-Id: a322dded-a402-4465-b0b3-63c2489e36cc
	I1024 19:35:34.723018   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:34.723023   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:34.723045   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:34.723054   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:34.723062   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:34 GMT
	I1024 19:35:34.723167   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m03","uid":"b46ce2c5-5d6c-4894-ad88-10111966a53a","resourceVersion":"871","creationTimestamp":"2023-10-24T19:27:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:27:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3412 chars]
	I1024 19:35:34.723432   33086 pod_ready.go:92] pod "kube-proxy-vjr8q" in "kube-system" namespace has status "Ready":"True"
	I1024 19:35:34.723448   33086 pod_ready.go:81] duration metric: took 400.521468ms waiting for pod "kube-proxy-vjr8q" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:34.723456   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:34.919556   33086 request.go:629] Waited for 196.041342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-632589
	I1024 19:35:34.919633   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-632589
	I1024 19:35:34.919639   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:34.919647   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:34.919683   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:34.922109   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:34.922126   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:34.922133   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:34.922139   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:34 GMT
	I1024 19:35:34.922144   33086 round_trippers.go:580]     Audit-Id: b463bf4d-d6ef-49c0-8211-e4215abaf7f4
	I1024 19:35:34.922149   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:34.922154   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:34.922159   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:34.922318   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-632589","namespace":"kube-system","uid":"e85a7c19-1a25-42f5-81bd-16ed7070ca3c","resourceVersion":"857","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"83154ed970e6208e036ff8de26a58e6d","kubernetes.io/config.mirror":"83154ed970e6208e036ff8de26a58e6d","kubernetes.io/config.seen":"2023-10-24T19:24:56.213306721Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1024 19:35:35.119775   33086 request.go:629] Waited for 197.042972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:35.119853   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:35:35.119860   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:35.119871   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:35.119878   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:35.122528   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:35.122546   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:35.122553   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:35.122558   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:35.122564   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:35.122570   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:35.122578   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:35 GMT
	I1024 19:35:35.122583   33086 round_trippers.go:580]     Audit-Id: 8661c054-0287-45d4-8713-ca46501baaf3
	I1024 19:35:35.122869   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1024 19:35:35.123228   33086 pod_ready.go:92] pod "kube-scheduler-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:35:35.123245   33086 pod_ready.go:81] duration metric: took 399.782894ms waiting for pod "kube-scheduler-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:35:35.123254   33086 pod_ready.go:38] duration metric: took 7.596758418s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
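The block above is the end of the pod_ready.go wait loop: each system-critical pod was polled via GET /api/v1/namespaces/kube-system/pods/&lt;name&gt; (roughly every 500ms, under a 6m0s budget) until its Ready condition reported True. A minimal client-go sketch of that kind of check follows; the kubeconfig path is an assumed placeholder and the poll interval/budget simply mirror what the log shows, so treat it as an illustration rather than minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig location; not taken from this test run.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // mirrors the "waiting up to 6m0s" budget in the log
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"kube-apiserver-multinode-632589", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // the log shows roughly half-second gaps between polls
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}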
	I1024 19:35:35.123267   33086 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:35:35.123320   33086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:35:35.136082   33086 command_runner.go:130] > 1077
	I1024 19:35:35.136388   33086 api_server.go:72] duration metric: took 8.990040437s to wait for apiserver process to appear ...
	I1024 19:35:35.136406   33086 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:35:35.136427   33086 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I1024 19:35:35.142362   33086 api_server.go:279] https://192.168.39.247:8443/healthz returned 200:
	ok
	I1024 19:35:35.142415   33086 round_trippers.go:463] GET https://192.168.39.247:8443/version
	I1024 19:35:35.142422   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:35.142430   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:35.142443   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:35.143359   33086 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1024 19:35:35.143379   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:35.143388   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:35.143400   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:35.143408   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:35.143417   33086 round_trippers.go:580]     Content-Length: 264
	I1024 19:35:35.143429   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:35 GMT
	I1024 19:35:35.143435   33086 round_trippers.go:580]     Audit-Id: d5ba38d9-b3e2-4805-ae2e-457c2ba00a22
	I1024 19:35:35.143440   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:35.143458   33086 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1024 19:35:35.143491   33086 api_server.go:141] control plane version: v1.28.3
	I1024 19:35:35.143505   33086 api_server.go:131] duration metric: took 7.092149ms to wait for apiserver health ...
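The two requests just above are the apiserver probes: a GET to /healthz that expects the literal body "ok", and a GET to /version whose JSON reports the control plane as v1.28.3. A hedged sketch of the same two calls through client-go's discovery client (kubeconfig path assumed, not from this run) follows.

	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Equivalent of GET https://<apiserver>/healthz; a healthy apiserver returns "ok".
		body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body)

		// Equivalent of GET https://<apiserver>/version.
		v, err := client.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion) // e.g. v1.28.3 in this run
	}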
	I1024 19:35:35.143514   33086 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:35:35.319936   33086 request.go:629] Waited for 176.358468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:35:35.319990   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:35:35.319995   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:35.320003   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:35.320009   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:35.324005   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:35:35.324026   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:35.324032   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:35.324038   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:35.324043   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:35 GMT
	I1024 19:35:35.324048   33086 round_trippers.go:580]     Audit-Id: 30dc7464-5ecd-4568-bdc0-8f98c20641a8
	I1024 19:35:35.324053   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:35.324058   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:35.325613   33086 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"875"},"items":[{"metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"856","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81875 chars]
	I1024 19:35:35.327896   33086 system_pods.go:59] 12 kube-system pods found
	I1024 19:35:35.327914   33086 system_pods.go:61] "coredns-5dd5756b68-c5l8s" [20aa782d-e6ed-45ad-b625-556d1a8503c0] Running
	I1024 19:35:35.327920   33086 system_pods.go:61] "etcd-multinode-632589" [a84a9833-e3b8-4148-9ee7-3f4479a10186] Running
	I1024 19:35:35.327926   33086 system_pods.go:61] "kindnet-pwmd9" [6e2f396b-dc71-4dd2-8521-ecce4287f61c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1024 19:35:35.327932   33086 system_pods.go:61] "kindnet-qvkwv" [ec1ea359-8477-4d62-ab29-95a048433575] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1024 19:35:35.327937   33086 system_pods.go:61] "kindnet-xh444" [dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b] Running
	I1024 19:35:35.327941   33086 system_pods.go:61] "kube-apiserver-multinode-632589" [34fcbf72-bf92-477f-8c1c-b0fd908c561d] Running
	I1024 19:35:35.327946   33086 system_pods.go:61] "kube-controller-manager-multinode-632589" [6eb03208-9b7f-4b5d-a7cf-03dd9c7948e6] Running
	I1024 19:35:35.327953   33086 system_pods.go:61] "kube-proxy-6vn7s" [d6b9189d-1bbe-4de8-a0d8-4ea43b55a45b] Running
	I1024 19:35:35.327959   33086 system_pods.go:61] "kube-proxy-gd49s" [a1c573fd-3f4b-4d90-a366-6d859a121185] Running
	I1024 19:35:35.327963   33086 system_pods.go:61] "kube-proxy-vjr8q" [844852b2-3dbb-4d52-a752-b39021adfc04] Running
	I1024 19:35:35.327968   33086 system_pods.go:61] "kube-scheduler-multinode-632589" [e85a7c19-1a25-42f5-81bd-16ed7070ca3c] Running
	I1024 19:35:35.327972   33086 system_pods.go:61] "storage-provisioner" [4023756b-6e38-476d-8dec-90ea2346dc01] Running
	I1024 19:35:35.327983   33086 system_pods.go:74] duration metric: took 184.46056ms to wait for pod list to return data ...
	I1024 19:35:35.327992   33086 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:35:35.519358   33086 request.go:629] Waited for 191.296747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/default/serviceaccounts
	I1024 19:35:35.519421   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/default/serviceaccounts
	I1024 19:35:35.519426   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:35.519433   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:35.519442   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:35.522239   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:35.522258   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:35.522265   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:35.522271   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:35.522276   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:35.522282   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:35.522287   33086 round_trippers.go:580]     Content-Length: 261
	I1024 19:35:35.522292   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:35 GMT
	I1024 19:35:35.522306   33086 round_trippers.go:580]     Audit-Id: 0ef7af41-cd42-4f88-bdb3-4ba30c7e4a56
	I1024 19:35:35.522323   33086 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"875"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"44688757-fcd3-49d1-a7b3-5cd59b15336d","resourceVersion":"346","creationTimestamp":"2023-10-24T19:25:09Z"}}]}
	I1024 19:35:35.522468   33086 default_sa.go:45] found service account: "default"
	I1024 19:35:35.522483   33086 default_sa.go:55] duration metric: took 194.482828ms for default service account to be created ...
	I1024 19:35:35.522490   33086 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:35:35.719891   33086 request.go:629] Waited for 197.339249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:35:35.719964   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:35:35.719973   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:35.719980   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:35.719989   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:35.724556   33086 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:35:35.724580   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:35.724587   33086 round_trippers.go:580]     Audit-Id: 079027ad-53aa-472a-9549-84887094c3f2
	I1024 19:35:35.724592   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:35.724597   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:35.724609   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:35.724616   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:35.724624   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:35 GMT
	I1024 19:35:35.725406   33086 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"875"},"items":[{"metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"856","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81875 chars]
	I1024 19:35:35.727666   33086 system_pods.go:86] 12 kube-system pods found
	I1024 19:35:35.727683   33086 system_pods.go:89] "coredns-5dd5756b68-c5l8s" [20aa782d-e6ed-45ad-b625-556d1a8503c0] Running
	I1024 19:35:35.727688   33086 system_pods.go:89] "etcd-multinode-632589" [a84a9833-e3b8-4148-9ee7-3f4479a10186] Running
	I1024 19:35:35.727694   33086 system_pods.go:89] "kindnet-pwmd9" [6e2f396b-dc71-4dd2-8521-ecce4287f61c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1024 19:35:35.727701   33086 system_pods.go:89] "kindnet-qvkwv" [ec1ea359-8477-4d62-ab29-95a048433575] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1024 19:35:35.727706   33086 system_pods.go:89] "kindnet-xh444" [dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b] Running
	I1024 19:35:35.727710   33086 system_pods.go:89] "kube-apiserver-multinode-632589" [34fcbf72-bf92-477f-8c1c-b0fd908c561d] Running
	I1024 19:35:35.727715   33086 system_pods.go:89] "kube-controller-manager-multinode-632589" [6eb03208-9b7f-4b5d-a7cf-03dd9c7948e6] Running
	I1024 19:35:35.727722   33086 system_pods.go:89] "kube-proxy-6vn7s" [d6b9189d-1bbe-4de8-a0d8-4ea43b55a45b] Running
	I1024 19:35:35.727726   33086 system_pods.go:89] "kube-proxy-gd49s" [a1c573fd-3f4b-4d90-a366-6d859a121185] Running
	I1024 19:35:35.727729   33086 system_pods.go:89] "kube-proxy-vjr8q" [844852b2-3dbb-4d52-a752-b39021adfc04] Running
	I1024 19:35:35.727735   33086 system_pods.go:89] "kube-scheduler-multinode-632589" [e85a7c19-1a25-42f5-81bd-16ed7070ca3c] Running
	I1024 19:35:35.727739   33086 system_pods.go:89] "storage-provisioner" [4023756b-6e38-476d-8dec-90ea2346dc01] Running
	I1024 19:35:35.727744   33086 system_pods.go:126] duration metric: took 205.249569ms to wait for k8s-apps to be running ...
	I1024 19:35:35.727751   33086 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:35:35.727789   33086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:35:35.741514   33086 system_svc.go:56] duration metric: took 13.753424ms WaitForService to wait for kubelet.
	I1024 19:35:35.741537   33086 kubeadm.go:581] duration metric: took 9.595196129s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:35:35.741552   33086 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:35:35.919964   33086 request.go:629] Waited for 178.341855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes
	I1024 19:35:35.920015   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes
	I1024 19:35:35.920020   33086 round_trippers.go:469] Request Headers:
	I1024 19:35:35.920037   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:35:35.920044   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:35:35.923023   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:35:35.923043   33086 round_trippers.go:577] Response Headers:
	I1024 19:35:35.923051   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:35:35.923057   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:35:35.923062   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:35:35.923067   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:35:35 GMT
	I1024 19:35:35.923072   33086 round_trippers.go:580]     Audit-Id: 6952e4a6-0b76-4df3-8c2d-308313bf2b05
	I1024 19:35:35.923077   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:35:35.923280   33086 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"875"},"items":[{"metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"846","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15075 chars]
	I1024 19:35:35.923839   33086 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:35:35.923857   33086 node_conditions.go:123] node cpu capacity is 2
	I1024 19:35:35.923867   33086 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:35:35.923871   33086 node_conditions.go:123] node cpu capacity is 2
	I1024 19:35:35.923874   33086 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:35:35.923878   33086 node_conditions.go:123] node cpu capacity is 2
	I1024 19:35:35.923885   33086 node_conditions.go:105] duration metric: took 182.329365ms to run NodePressure ...
	I1024 19:35:35.923895   33086 start.go:228] waiting for startup goroutines ...
	I1024 19:35:35.923902   33086 start.go:233] waiting for cluster config update ...
	I1024 19:35:35.923908   33086 start.go:242] writing updated cluster config ...
	I1024 19:35:35.924311   33086 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:35:35.924385   33086 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/config.json ...
	I1024 19:35:35.927526   33086 out.go:177] * Starting worker node multinode-632589-m02 in cluster multinode-632589
	I1024 19:35:35.928954   33086 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:35:35.928985   33086 cache.go:57] Caching tarball of preloaded images
	I1024 19:35:35.929098   33086 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 19:35:35.929111   33086 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:35:35.929205   33086 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/config.json ...
	I1024 19:35:35.929428   33086 start.go:365] acquiring machines lock for multinode-632589-m02: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:35:35.929491   33086 start.go:369] acquired machines lock for "multinode-632589-m02" in 42.888µs
	I1024 19:35:35.929506   33086 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:35:35.929512   33086 fix.go:54] fixHost starting: m02
	I1024 19:35:35.929778   33086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:35:35.929810   33086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:35:35.944194   33086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42517
	I1024 19:35:35.944622   33086 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:35:35.945067   33086 main.go:141] libmachine: Using API Version  1
	I1024 19:35:35.945087   33086 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:35:35.945447   33086 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:35:35.945624   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:35:35.945796   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetState
	I1024 19:35:35.947298   33086 fix.go:102] recreateIfNeeded on multinode-632589-m02: state=Running err=<nil>
	W1024 19:35:35.947314   33086 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:35:35.949235   33086 out.go:177] * Updating the running kvm2 "multinode-632589-m02" VM ...
	I1024 19:35:35.950635   33086 machine.go:88] provisioning docker machine ...
	I1024 19:35:35.950658   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:35:35.950904   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetMachineName
	I1024 19:35:35.951073   33086 buildroot.go:166] provisioning hostname "multinode-632589-m02"
	I1024 19:35:35.951094   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetMachineName
	I1024 19:35:35.951300   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:35:35.954010   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:35:35.954565   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:35:35.954595   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:35:35.954708   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:35:35.954889   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:35:35.955035   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:35:35.955207   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:35:35.955350   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:35:35.955769   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1024 19:35:35.955786   33086 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-632589-m02 && echo "multinode-632589-m02" | sudo tee /etc/hostname
	I1024 19:35:36.105255   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-632589-m02
	
	I1024 19:35:36.105286   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:35:36.108017   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:35:36.108291   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:35:36.108328   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:35:36.108455   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:35:36.108645   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:35:36.108821   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:35:36.108953   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:35:36.109117   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:35:36.109515   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1024 19:35:36.109540   33086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-632589-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-632589-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-632589-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:35:36.242312   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:35:36.242336   33086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 19:35:36.242350   33086 buildroot.go:174] setting up certificates
	I1024 19:35:36.242356   33086 provision.go:83] configureAuth start
	I1024 19:35:36.242364   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetMachineName
	I1024 19:35:36.242627   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetIP
	I1024 19:35:36.245120   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:35:36.245515   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:35:36.245561   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:35:36.245747   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:35:36.248003   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:35:36.248390   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:35:36.248427   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:35:36.248562   33086 provision.go:138] copyHostCerts
	I1024 19:35:36.248592   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:35:36.248625   33086 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 19:35:36.248636   33086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:35:36.248725   33086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 19:35:36.248808   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:35:36.248826   33086 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 19:35:36.248833   33086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:35:36.248858   33086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 19:35:36.248900   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:35:36.248935   33086 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 19:35:36.248941   33086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:35:36.248962   33086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 19:35:36.249009   33086 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.multinode-632589-m02 san=[192.168.39.186 192.168.39.186 localhost 127.0.0.1 minikube multinode-632589-m02]
	I1024 19:35:36.391195   33086 provision.go:172] copyRemoteCerts
	I1024 19:35:36.391245   33086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:35:36.391265   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:35:36.393649   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:35:36.394008   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:35:36.394046   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:35:36.394221   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:35:36.394408   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:35:36.394603   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:35:36.394749   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/id_rsa Username:docker}
	I1024 19:35:36.487381   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 19:35:36.487459   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 19:35:36.511948   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 19:35:36.512009   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1024 19:35:36.535204   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 19:35:36.535264   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 19:35:36.558346   33086 provision.go:86] duration metric: configureAuth took 315.974636ms
	I1024 19:35:36.558369   33086 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:35:36.558565   33086 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:35:36.558632   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:35:36.561243   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:35:36.561775   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:35:36.561809   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:35:36.562010   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:35:36.562209   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:35:36.562359   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:35:36.562479   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:35:36.562660   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:35:36.563257   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1024 19:35:36.563296   33086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:37:07.143577   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:37:07.143608   33086 machine.go:91] provisioned docker machine in 1m31.192956563s
	I1024 19:37:07.143621   33086 start.go:300] post-start starting for "multinode-632589-m02" (driver="kvm2")
	I1024 19:37:07.143646   33086 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:37:07.143673   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:37:07.144003   33086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:37:07.144032   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:37:07.147053   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:37:07.147421   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:37:07.147451   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:37:07.147622   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:37:07.147801   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:37:07.147979   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:37:07.148138   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/id_rsa Username:docker}
	I1024 19:37:07.245451   33086 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:37:07.249961   33086 command_runner.go:130] > NAME=Buildroot
	I1024 19:37:07.249981   33086 command_runner.go:130] > VERSION=2021.02.12-1-g71212f5-dirty
	I1024 19:37:07.249989   33086 command_runner.go:130] > ID=buildroot
	I1024 19:37:07.249997   33086 command_runner.go:130] > VERSION_ID=2021.02.12
	I1024 19:37:07.250004   33086 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1024 19:37:07.250270   33086 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 19:37:07.250292   33086 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 19:37:07.250360   33086 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 19:37:07.250457   33086 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 19:37:07.250472   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> /etc/ssl/certs/162982.pem
	I1024 19:37:07.250575   33086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:37:07.260316   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:37:07.284116   33086 start.go:303] post-start completed in 140.478181ms
	I1024 19:37:07.284149   33086 fix.go:56] fixHost completed within 1m31.354629981s
	I1024 19:37:07.284174   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:37:07.286856   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:37:07.287269   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:37:07.287301   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:37:07.287442   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:37:07.287639   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:37:07.287813   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:37:07.287960   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:37:07.288196   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:37:07.288543   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1024 19:37:07.288557   33086 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 19:37:07.418294   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698176227.406163651
	
	I1024 19:37:07.418318   33086 fix.go:206] guest clock: 1698176227.406163651
	I1024 19:37:07.418329   33086 fix.go:219] Guest: 2023-10-24 19:37:07.406163651 +0000 UTC Remote: 2023-10-24 19:37:07.284153521 +0000 UTC m=+452.397974509 (delta=122.01013ms)
	I1024 19:37:07.418347   33086 fix.go:190] guest clock delta is within tolerance: 122.01013ms
	I1024 19:37:07.418353   33086 start.go:83] releasing machines lock for "multinode-632589-m02", held for 1m31.488852183s
	I1024 19:37:07.418384   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:37:07.418656   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetIP
	I1024 19:37:07.421095   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:37:07.421467   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:37:07.421516   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:37:07.423371   33086 out.go:177] * Found network options:
	I1024 19:37:07.424775   33086 out.go:177]   - NO_PROXY=192.168.39.247
	W1024 19:37:07.426249   33086 proxy.go:119] fail to check proxy env: Error ip not in block
	I1024 19:37:07.426286   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:37:07.426798   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:37:07.426971   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:37:07.427079   33086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:37:07.427116   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	W1024 19:37:07.427180   33086 proxy.go:119] fail to check proxy env: Error ip not in block
	I1024 19:37:07.427244   33086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:37:07.427267   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:37:07.429746   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:37:07.429816   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:37:07.430125   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:37:07.430152   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:37:07.430195   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:37:07.430214   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:37:07.430324   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:37:07.430451   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:37:07.430525   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:37:07.430628   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:37:07.430694   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:37:07.430760   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:37:07.430808   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/id_rsa Username:docker}
	I1024 19:37:07.430868   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/id_rsa Username:docker}
	I1024 19:37:07.664814   33086 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1024 19:37:07.664857   33086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:37:07.671323   33086 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1024 19:37:07.671358   33086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:37:07.671414   33086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:37:07.679902   33086 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1024 19:37:07.679918   33086 start.go:472] detecting cgroup driver to use...
	I1024 19:37:07.679974   33086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:37:07.693089   33086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:37:07.704737   33086 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:37:07.704778   33086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:37:07.716846   33086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:37:07.728798   33086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:37:07.861362   33086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:37:07.985584   33086 docker.go:214] disabling docker service ...
	I1024 19:37:07.985654   33086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:37:07.999235   33086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:37:08.012779   33086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:37:08.131584   33086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:37:08.292912   33086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:37:08.313259   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:37:08.332006   33086 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1024 19:37:08.332051   33086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:37:08.332104   33086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:37:08.342681   33086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:37:08.342751   33086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:37:08.357870   33086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:37:08.368034   33086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:37:08.378948   33086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:37:08.390548   33086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:37:08.399963   33086 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1024 19:37:08.400107   33086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:37:08.409453   33086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:37:08.558988   33086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:37:11.621279   33086 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.06226117s)
	I1024 19:37:11.621317   33086 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:37:11.621360   33086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:37:11.626398   33086 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1024 19:37:11.626412   33086 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1024 19:37:11.626419   33086 command_runner.go:130] > Device: 16h/22d	Inode: 1259        Links: 1
	I1024 19:37:11.626425   33086 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:37:11.626430   33086 command_runner.go:130] > Access: 2023-10-24 19:37:11.522269052 +0000
	I1024 19:37:11.626436   33086 command_runner.go:130] > Modify: 2023-10-24 19:37:11.522269052 +0000
	I1024 19:37:11.626441   33086 command_runner.go:130] > Change: 2023-10-24 19:37:11.522269052 +0000
	I1024 19:37:11.626445   33086 command_runner.go:130] >  Birth: -
	I1024 19:37:11.626857   33086 start.go:540] Will wait 60s for crictl version
	I1024 19:37:11.626910   33086 ssh_runner.go:195] Run: which crictl
	I1024 19:37:11.630690   33086 command_runner.go:130] > /usr/bin/crictl
	I1024 19:37:11.630755   33086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:37:11.671698   33086 command_runner.go:130] > Version:  0.1.0
	I1024 19:37:11.671719   33086 command_runner.go:130] > RuntimeName:  cri-o
	I1024 19:37:11.671726   33086 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1024 19:37:11.671734   33086 command_runner.go:130] > RuntimeApiVersion:  v1
	I1024 19:37:11.671818   33086 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 19:37:11.671880   33086 ssh_runner.go:195] Run: crio --version
	I1024 19:37:11.720198   33086 command_runner.go:130] > crio version 1.24.1
	I1024 19:37:11.720215   33086 command_runner.go:130] > Version:          1.24.1
	I1024 19:37:11.720222   33086 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1024 19:37:11.720227   33086 command_runner.go:130] > GitTreeState:     dirty
	I1024 19:37:11.720236   33086 command_runner.go:130] > BuildDate:        2023-10-16T21:18:20Z
	I1024 19:37:11.720241   33086 command_runner.go:130] > GoVersion:        go1.19.9
	I1024 19:37:11.720245   33086 command_runner.go:130] > Compiler:         gc
	I1024 19:37:11.720249   33086 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:37:11.720256   33086 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:37:11.720263   33086 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:37:11.720268   33086 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:37:11.720273   33086 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:37:11.721905   33086 ssh_runner.go:195] Run: crio --version
	I1024 19:37:11.767310   33086 command_runner.go:130] > crio version 1.24.1
	I1024 19:37:11.767334   33086 command_runner.go:130] > Version:          1.24.1
	I1024 19:37:11.767342   33086 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1024 19:37:11.767346   33086 command_runner.go:130] > GitTreeState:     dirty
	I1024 19:37:11.767352   33086 command_runner.go:130] > BuildDate:        2023-10-16T21:18:20Z
	I1024 19:37:11.767356   33086 command_runner.go:130] > GoVersion:        go1.19.9
	I1024 19:37:11.767360   33086 command_runner.go:130] > Compiler:         gc
	I1024 19:37:11.767365   33086 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:37:11.767375   33086 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:37:11.767382   33086 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:37:11.767387   33086 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:37:11.767392   33086 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:37:11.770753   33086 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 19:37:11.772121   33086 out.go:177]   - env NO_PROXY=192.168.39.247
	I1024 19:37:11.773455   33086 main.go:141] libmachine: (multinode-632589-m02) Calling .GetIP
	I1024 19:37:11.775844   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:37:11.776223   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:37:11.776249   33086 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:37:11.776406   33086 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 19:37:11.780562   33086 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1024 19:37:11.780716   33086 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589 for IP: 192.168.39.186
	I1024 19:37:11.780740   33086 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:37:11.780876   33086 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 19:37:11.780929   33086 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 19:37:11.780943   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 19:37:11.780962   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 19:37:11.780976   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 19:37:11.780996   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 19:37:11.781073   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 19:37:11.781114   33086 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 19:37:11.781131   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 19:37:11.781165   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 19:37:11.781197   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:37:11.781228   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 19:37:11.781283   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:37:11.781332   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem -> /usr/share/ca-certificates/16298.pem
	I1024 19:37:11.781351   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> /usr/share/ca-certificates/162982.pem
	I1024 19:37:11.781372   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:37:11.781740   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:37:11.807926   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:37:11.828805   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:37:11.851594   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 19:37:11.875896   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 19:37:11.899856   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 19:37:11.923200   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:37:11.946914   33086 ssh_runner.go:195] Run: openssl version
	I1024 19:37:11.953762   33086 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1024 19:37:11.953834   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 19:37:11.965788   33086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 19:37:11.970491   33086 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 19:37:11.970837   33086 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 19:37:11.970886   33086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 19:37:11.976239   33086 command_runner.go:130] > 51391683
	I1024 19:37:11.976577   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 19:37:11.987040   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 19:37:11.998737   33086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 19:37:12.003321   33086 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 19:37:12.003343   33086 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 19:37:12.003381   33086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 19:37:12.008757   33086 command_runner.go:130] > 3ec20f2e
	I1024 19:37:12.008806   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 19:37:12.019278   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:37:12.030806   33086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:37:12.035056   33086 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:37:12.035309   33086 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:37:12.035345   33086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:37:12.040700   33086 command_runner.go:130] > b5213941
	I1024 19:37:12.040963   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:37:12.051200   33086 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:37:12.055876   33086 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:37:12.055924   33086 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:37:12.055999   33086 ssh_runner.go:195] Run: crio config
	I1024 19:37:12.121510   33086 command_runner.go:130] ! time="2023-10-24 19:37:12.109530917Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1024 19:37:12.121612   33086 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1024 19:37:12.132252   33086 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1024 19:37:12.132277   33086 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1024 19:37:12.132289   33086 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1024 19:37:12.132294   33086 command_runner.go:130] > #
	I1024 19:37:12.132305   33086 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1024 19:37:12.132315   33086 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1024 19:37:12.132326   33086 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1024 19:37:12.132333   33086 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1024 19:37:12.132338   33086 command_runner.go:130] > # reload'.
	I1024 19:37:12.132344   33086 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1024 19:37:12.132357   33086 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1024 19:37:12.132370   33086 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1024 19:37:12.132383   33086 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1024 19:37:12.132393   33086 command_runner.go:130] > [crio]
	I1024 19:37:12.132405   33086 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1024 19:37:12.132415   33086 command_runner.go:130] > # containers images, in this directory.
	I1024 19:37:12.132421   33086 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1024 19:37:12.132429   33086 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1024 19:37:12.132440   33086 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1024 19:37:12.132450   33086 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1024 19:37:12.132463   33086 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1024 19:37:12.132473   33086 command_runner.go:130] > storage_driver = "overlay"
	I1024 19:37:12.132482   33086 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1024 19:37:12.132495   33086 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1024 19:37:12.132506   33086 command_runner.go:130] > storage_option = [
	I1024 19:37:12.132520   33086 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1024 19:37:12.132529   33086 command_runner.go:130] > ]
	I1024 19:37:12.132540   33086 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1024 19:37:12.132551   33086 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1024 19:37:12.132560   33086 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1024 19:37:12.132572   33086 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1024 19:37:12.132584   33086 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1024 19:37:12.132596   33086 command_runner.go:130] > # always happen on a node reboot
	I1024 19:37:12.132605   33086 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1024 19:37:12.132617   33086 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1024 19:37:12.132629   33086 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1024 19:37:12.132642   33086 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1024 19:37:12.132651   33086 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1024 19:37:12.132658   33086 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1024 19:37:12.132669   33086 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1024 19:37:12.132674   33086 command_runner.go:130] > # internal_wipe = true
	I1024 19:37:12.132682   33086 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1024 19:37:12.132688   33086 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1024 19:37:12.132695   33086 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1024 19:37:12.132701   33086 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1024 19:37:12.132714   33086 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1024 19:37:12.132724   33086 command_runner.go:130] > [crio.api]
	I1024 19:37:12.132733   33086 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1024 19:37:12.132743   33086 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1024 19:37:12.132755   33086 command_runner.go:130] > # IP address on which the stream server will listen.
	I1024 19:37:12.132766   33086 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1024 19:37:12.132780   33086 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1024 19:37:12.132792   33086 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1024 19:37:12.132801   33086 command_runner.go:130] > # stream_port = "0"
	I1024 19:37:12.132813   33086 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1024 19:37:12.132824   33086 command_runner.go:130] > # stream_enable_tls = false
	I1024 19:37:12.132837   33086 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1024 19:37:12.132847   33086 command_runner.go:130] > # stream_idle_timeout = ""
	I1024 19:37:12.132861   33086 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1024 19:37:12.132875   33086 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1024 19:37:12.132884   33086 command_runner.go:130] > # minutes.
	I1024 19:37:12.132892   33086 command_runner.go:130] > # stream_tls_cert = ""
	I1024 19:37:12.132901   33086 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1024 19:37:12.132910   33086 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1024 19:37:12.132914   33086 command_runner.go:130] > # stream_tls_key = ""
	I1024 19:37:12.132920   33086 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1024 19:37:12.132927   33086 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1024 19:37:12.132934   33086 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1024 19:37:12.132941   33086 command_runner.go:130] > # stream_tls_ca = ""
	I1024 19:37:12.132949   33086 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:37:12.132956   33086 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1024 19:37:12.132963   33086 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:37:12.132970   33086 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1024 19:37:12.132985   33086 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1024 19:37:12.132999   33086 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1024 19:37:12.133003   33086 command_runner.go:130] > [crio.runtime]
	I1024 19:37:12.133009   33086 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1024 19:37:12.133016   33086 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1024 19:37:12.133023   33086 command_runner.go:130] > # "nofile=1024:2048"
	I1024 19:37:12.133029   33086 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1024 19:37:12.133034   33086 command_runner.go:130] > # default_ulimits = [
	I1024 19:37:12.133037   33086 command_runner.go:130] > # ]
	I1024 19:37:12.133044   33086 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1024 19:37:12.133050   33086 command_runner.go:130] > # no_pivot = false
	I1024 19:37:12.133055   33086 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1024 19:37:12.133064   33086 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1024 19:37:12.133069   33086 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1024 19:37:12.133077   33086 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1024 19:37:12.133083   33086 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1024 19:37:12.133091   33086 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:37:12.133097   33086 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1024 19:37:12.133102   33086 command_runner.go:130] > # Cgroup setting for conmon
	I1024 19:37:12.133109   33086 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1024 19:37:12.133116   33086 command_runner.go:130] > conmon_cgroup = "pod"
	I1024 19:37:12.133122   33086 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1024 19:37:12.133130   33086 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1024 19:37:12.133138   33086 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:37:12.133144   33086 command_runner.go:130] > conmon_env = [
	I1024 19:37:12.133150   33086 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1024 19:37:12.133156   33086 command_runner.go:130] > ]
	I1024 19:37:12.133161   33086 command_runner.go:130] > # Additional environment variables to set for all the
	I1024 19:37:12.133169   33086 command_runner.go:130] > # containers. These are overridden if set in the
	I1024 19:37:12.133177   33086 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1024 19:37:12.133181   33086 command_runner.go:130] > # default_env = [
	I1024 19:37:12.133187   33086 command_runner.go:130] > # ]
	I1024 19:37:12.133192   33086 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1024 19:37:12.133199   33086 command_runner.go:130] > # selinux = false
	I1024 19:37:12.133205   33086 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1024 19:37:12.133213   33086 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1024 19:37:12.133221   33086 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1024 19:37:12.133228   33086 command_runner.go:130] > # seccomp_profile = ""
	I1024 19:37:12.133234   33086 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1024 19:37:12.133242   33086 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1024 19:37:12.133249   33086 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1024 19:37:12.133255   33086 command_runner.go:130] > # which might increase security.
	I1024 19:37:12.133260   33086 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1024 19:37:12.133268   33086 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1024 19:37:12.133277   33086 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1024 19:37:12.133283   33086 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1024 19:37:12.133292   33086 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1024 19:37:12.133315   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:37:12.133324   33086 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1024 19:37:12.133330   33086 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1024 19:37:12.133337   33086 command_runner.go:130] > # the cgroup blockio controller.
	I1024 19:37:12.133342   33086 command_runner.go:130] > # blockio_config_file = ""
	I1024 19:37:12.133351   33086 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1024 19:37:12.133358   33086 command_runner.go:130] > # irqbalance daemon.
	I1024 19:37:12.133363   33086 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1024 19:37:12.133372   33086 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1024 19:37:12.133379   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:37:12.133383   33086 command_runner.go:130] > # rdt_config_file = ""
	I1024 19:37:12.133391   33086 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1024 19:37:12.133397   33086 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1024 19:37:12.133403   33086 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1024 19:37:12.133410   33086 command_runner.go:130] > # separate_pull_cgroup = ""
	I1024 19:37:12.133418   33086 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1024 19:37:12.133426   33086 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1024 19:37:12.133432   33086 command_runner.go:130] > # will be added.
	I1024 19:37:12.133436   33086 command_runner.go:130] > # default_capabilities = [
	I1024 19:37:12.133442   33086 command_runner.go:130] > # 	"CHOWN",
	I1024 19:37:12.133447   33086 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1024 19:37:12.133453   33086 command_runner.go:130] > # 	"FSETID",
	I1024 19:37:12.133457   33086 command_runner.go:130] > # 	"FOWNER",
	I1024 19:37:12.133463   33086 command_runner.go:130] > # 	"SETGID",
	I1024 19:37:12.133466   33086 command_runner.go:130] > # 	"SETUID",
	I1024 19:37:12.133473   33086 command_runner.go:130] > # 	"SETPCAP",
	I1024 19:37:12.133478   33086 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1024 19:37:12.133484   33086 command_runner.go:130] > # 	"KILL",
	I1024 19:37:12.133487   33086 command_runner.go:130] > # ]
	I1024 19:37:12.133496   33086 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1024 19:37:12.133502   33086 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:37:12.133509   33086 command_runner.go:130] > # default_sysctls = [
	I1024 19:37:12.133520   33086 command_runner.go:130] > # ]
	I1024 19:37:12.133527   33086 command_runner.go:130] > # List of devices on the host that a
	I1024 19:37:12.133534   33086 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1024 19:37:12.133541   33086 command_runner.go:130] > # allowed_devices = [
	I1024 19:37:12.133545   33086 command_runner.go:130] > # 	"/dev/fuse",
	I1024 19:37:12.133549   33086 command_runner.go:130] > # ]
	I1024 19:37:12.133558   33086 command_runner.go:130] > # List of additional devices, specified as
	I1024 19:37:12.133568   33086 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1024 19:37:12.133575   33086 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1024 19:37:12.133593   33086 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:37:12.133600   33086 command_runner.go:130] > # additional_devices = [
	I1024 19:37:12.133603   33086 command_runner.go:130] > # ]
	I1024 19:37:12.133611   33086 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1024 19:37:12.133616   33086 command_runner.go:130] > # cdi_spec_dirs = [
	I1024 19:37:12.133620   33086 command_runner.go:130] > # 	"/etc/cdi",
	I1024 19:37:12.133626   33086 command_runner.go:130] > # 	"/var/run/cdi",
	I1024 19:37:12.133630   33086 command_runner.go:130] > # ]
	I1024 19:37:12.133638   33086 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1024 19:37:12.133646   33086 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1024 19:37:12.133653   33086 command_runner.go:130] > # Defaults to false.
	I1024 19:37:12.133659   33086 command_runner.go:130] > # device_ownership_from_security_context = false
	I1024 19:37:12.133667   33086 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1024 19:37:12.133675   33086 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1024 19:37:12.133679   33086 command_runner.go:130] > # hooks_dir = [
	I1024 19:37:12.133686   33086 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1024 19:37:12.133691   33086 command_runner.go:130] > # ]
	I1024 19:37:12.133697   33086 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1024 19:37:12.133706   33086 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1024 19:37:12.133718   33086 command_runner.go:130] > # its default mounts from the following two files:
	I1024 19:37:12.133727   33086 command_runner.go:130] > #
	I1024 19:37:12.133740   33086 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1024 19:37:12.133753   33086 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1024 19:37:12.133766   33086 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1024 19:37:12.133774   33086 command_runner.go:130] > #
	I1024 19:37:12.133786   33086 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1024 19:37:12.133800   33086 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1024 19:37:12.133811   33086 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1024 19:37:12.133820   33086 command_runner.go:130] > #      only add mounts it finds in this file.
	I1024 19:37:12.133826   33086 command_runner.go:130] > #
	I1024 19:37:12.133831   33086 command_runner.go:130] > # default_mounts_file = ""
	I1024 19:37:12.133839   33086 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1024 19:37:12.133846   33086 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1024 19:37:12.133853   33086 command_runner.go:130] > pids_limit = 1024
	I1024 19:37:12.133860   33086 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1024 19:37:12.133869   33086 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1024 19:37:12.133877   33086 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1024 19:37:12.133887   33086 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1024 19:37:12.133893   33086 command_runner.go:130] > # log_size_max = -1
	I1024 19:37:12.133901   33086 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1024 19:37:12.133907   33086 command_runner.go:130] > # log_to_journald = false
	I1024 19:37:12.133913   33086 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1024 19:37:12.133918   33086 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1024 19:37:12.133926   33086 command_runner.go:130] > # Path to directory for container attach sockets.
	I1024 19:37:12.133931   33086 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1024 19:37:12.133938   33086 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1024 19:37:12.133945   33086 command_runner.go:130] > # bind_mount_prefix = ""
	I1024 19:37:12.133950   33086 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1024 19:37:12.133956   33086 command_runner.go:130] > # read_only = false
	I1024 19:37:12.133963   33086 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1024 19:37:12.133971   33086 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1024 19:37:12.133978   33086 command_runner.go:130] > # live configuration reload.
	I1024 19:37:12.133982   33086 command_runner.go:130] > # log_level = "info"
	I1024 19:37:12.133990   33086 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1024 19:37:12.133998   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:37:12.134002   33086 command_runner.go:130] > # log_filter = ""
	I1024 19:37:12.134008   33086 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1024 19:37:12.134018   33086 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1024 19:37:12.134025   33086 command_runner.go:130] > # separated by comma.
	I1024 19:37:12.134030   33086 command_runner.go:130] > # uid_mappings = ""
	I1024 19:37:12.134038   33086 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1024 19:37:12.134044   33086 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1024 19:37:12.134051   33086 command_runner.go:130] > # separated by comma.
	I1024 19:37:12.134055   33086 command_runner.go:130] > # gid_mappings = ""
	I1024 19:37:12.134063   33086 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1024 19:37:12.134070   33086 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:37:12.134078   33086 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:37:12.134082   33086 command_runner.go:130] > # minimum_mappable_uid = -1
	I1024 19:37:12.134089   33086 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1024 19:37:12.134096   33086 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:37:12.134104   33086 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:37:12.134109   33086 command_runner.go:130] > # minimum_mappable_gid = -1
	I1024 19:37:12.134117   33086 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1024 19:37:12.134123   33086 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1024 19:37:12.134130   33086 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1024 19:37:12.134135   33086 command_runner.go:130] > # ctr_stop_timeout = 30
	I1024 19:37:12.134143   33086 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1024 19:37:12.134149   33086 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1024 19:37:12.134156   33086 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1024 19:37:12.134161   33086 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1024 19:37:12.134169   33086 command_runner.go:130] > drop_infra_ctr = false
	I1024 19:37:12.134178   33086 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1024 19:37:12.134183   33086 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1024 19:37:12.134193   33086 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1024 19:37:12.134199   33086 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1024 19:37:12.134205   33086 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1024 19:37:12.134213   33086 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1024 19:37:12.134219   33086 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1024 19:37:12.134226   33086 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1024 19:37:12.134233   33086 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1024 19:37:12.134239   33086 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1024 19:37:12.134248   33086 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1024 19:37:12.134257   33086 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1024 19:37:12.134263   33086 command_runner.go:130] > # default_runtime = "runc"
	I1024 19:37:12.134268   33086 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1024 19:37:12.134278   33086 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1024 19:37:12.134288   33086 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1024 19:37:12.134295   33086 command_runner.go:130] > # creation as a file is not desired either.
	I1024 19:37:12.134305   33086 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1024 19:37:12.134312   33086 command_runner.go:130] > # the hostname is being managed dynamically.
	I1024 19:37:12.134320   33086 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1024 19:37:12.134323   33086 command_runner.go:130] > # ]
	I1024 19:37:12.134332   33086 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1024 19:37:12.134340   33086 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1024 19:37:12.134347   33086 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1024 19:37:12.134356   33086 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1024 19:37:12.134361   33086 command_runner.go:130] > #
	I1024 19:37:12.134366   33086 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1024 19:37:12.134373   33086 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1024 19:37:12.134378   33086 command_runner.go:130] > #  runtime_type = "oci"
	I1024 19:37:12.134385   33086 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1024 19:37:12.134390   33086 command_runner.go:130] > #  privileged_without_host_devices = false
	I1024 19:37:12.134396   33086 command_runner.go:130] > #  allowed_annotations = []
	I1024 19:37:12.134400   33086 command_runner.go:130] > # Where:
	I1024 19:37:12.134407   33086 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1024 19:37:12.134416   33086 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1024 19:37:12.134425   33086 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1024 19:37:12.134433   33086 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1024 19:37:12.134439   33086 command_runner.go:130] > #   in $PATH.
	I1024 19:37:12.134446   33086 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1024 19:37:12.134453   33086 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1024 19:37:12.134459   33086 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1024 19:37:12.134465   33086 command_runner.go:130] > #   state.
	I1024 19:37:12.134471   33086 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1024 19:37:12.134480   33086 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1024 19:37:12.134488   33086 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1024 19:37:12.134496   33086 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1024 19:37:12.134504   33086 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1024 19:37:12.134519   33086 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1024 19:37:12.134526   33086 command_runner.go:130] > #   The currently recognized values are:
	I1024 19:37:12.134533   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1024 19:37:12.134542   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1024 19:37:12.134549   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1024 19:37:12.134557   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1024 19:37:12.134567   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1024 19:37:12.134575   33086 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1024 19:37:12.134584   33086 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1024 19:37:12.134593   33086 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1024 19:37:12.134601   33086 command_runner.go:130] > #   should be moved to the container's cgroup
	I1024 19:37:12.134605   33086 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1024 19:37:12.134613   33086 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1024 19:37:12.134623   33086 command_runner.go:130] > runtime_type = "oci"
	I1024 19:37:12.134633   33086 command_runner.go:130] > runtime_root = "/run/runc"
	I1024 19:37:12.134643   33086 command_runner.go:130] > runtime_config_path = ""
	I1024 19:37:12.134653   33086 command_runner.go:130] > monitor_path = ""
	I1024 19:37:12.134663   33086 command_runner.go:130] > monitor_cgroup = ""
	I1024 19:37:12.134670   33086 command_runner.go:130] > monitor_exec_cgroup = ""
	I1024 19:37:12.134683   33086 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1024 19:37:12.134693   33086 command_runner.go:130] > # running containers
	I1024 19:37:12.134704   33086 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1024 19:37:12.134717   33086 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1024 19:37:12.134787   33086 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1024 19:37:12.134807   33086 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1024 19:37:12.134816   33086 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1024 19:37:12.134827   33086 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1024 19:37:12.134839   33086 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1024 19:37:12.134849   33086 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1024 19:37:12.134860   33086 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1024 19:37:12.134871   33086 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1024 19:37:12.134881   33086 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1024 19:37:12.134889   33086 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1024 19:37:12.134898   33086 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1024 19:37:12.134908   33086 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1024 19:37:12.134919   33086 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1024 19:37:12.134927   33086 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1024 19:37:12.134936   33086 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1024 19:37:12.134947   33086 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1024 19:37:12.134955   33086 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1024 19:37:12.134965   33086 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1024 19:37:12.134971   33086 command_runner.go:130] > # Example:
	I1024 19:37:12.134976   33086 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1024 19:37:12.134983   33086 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1024 19:37:12.134989   33086 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1024 19:37:12.134997   33086 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1024 19:37:12.135003   33086 command_runner.go:130] > # cpuset = 0
	I1024 19:37:12.135008   33086 command_runner.go:130] > # cpushares = "0-1"
	I1024 19:37:12.135013   33086 command_runner.go:130] > # Where:
	I1024 19:37:12.135023   33086 command_runner.go:130] > # The workload name is workload-type.
	I1024 19:37:12.135030   33086 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1024 19:37:12.135038   33086 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1024 19:37:12.135044   33086 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1024 19:37:12.135055   33086 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1024 19:37:12.135064   33086 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1024 19:37:12.135067   33086 command_runner.go:130] > # 
	I1024 19:37:12.135076   33086 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1024 19:37:12.135082   33086 command_runner.go:130] > #
	I1024 19:37:12.135088   33086 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1024 19:37:12.135096   33086 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1024 19:37:12.135102   33086 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1024 19:37:12.135110   33086 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1024 19:37:12.135118   33086 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1024 19:37:12.135122   33086 command_runner.go:130] > [crio.image]
	I1024 19:37:12.135129   33086 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1024 19:37:12.135137   33086 command_runner.go:130] > # default_transport = "docker://"
	I1024 19:37:12.135144   33086 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1024 19:37:12.135156   33086 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:37:12.135162   33086 command_runner.go:130] > # global_auth_file = ""
	I1024 19:37:12.135167   33086 command_runner.go:130] > # The image used to instantiate infra containers.
	I1024 19:37:12.135175   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:37:12.135182   33086 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1024 19:37:12.135189   33086 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1024 19:37:12.135196   33086 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:37:12.135204   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:37:12.135209   33086 command_runner.go:130] > # pause_image_auth_file = ""
	I1024 19:37:12.135217   33086 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1024 19:37:12.135225   33086 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1024 19:37:12.135234   33086 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1024 19:37:12.135243   33086 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1024 19:37:12.135250   33086 command_runner.go:130] > # pause_command = "/pause"
	I1024 19:37:12.135256   33086 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1024 19:37:12.135264   33086 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1024 19:37:12.135273   33086 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1024 19:37:12.135281   33086 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1024 19:37:12.135289   33086 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1024 19:37:12.135294   33086 command_runner.go:130] > # signature_policy = ""
	I1024 19:37:12.135301   33086 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1024 19:37:12.135310   33086 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1024 19:37:12.135314   33086 command_runner.go:130] > # changing them here.
	I1024 19:37:12.135319   33086 command_runner.go:130] > # insecure_registries = [
	I1024 19:37:12.135324   33086 command_runner.go:130] > # ]
	I1024 19:37:12.135334   33086 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1024 19:37:12.135342   33086 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1024 19:37:12.135348   33086 command_runner.go:130] > # image_volumes = "mkdir"
	I1024 19:37:12.135354   33086 command_runner.go:130] > # Temporary directory to use for storing big files
	I1024 19:37:12.135360   33086 command_runner.go:130] > # big_files_temporary_dir = ""
	I1024 19:37:12.135367   33086 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1024 19:37:12.135373   33086 command_runner.go:130] > # CNI plugins.
	I1024 19:37:12.135377   33086 command_runner.go:130] > [crio.network]
	I1024 19:37:12.135385   33086 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1024 19:37:12.135393   33086 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1024 19:37:12.135400   33086 command_runner.go:130] > # cni_default_network = ""
	I1024 19:37:12.135409   33086 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1024 19:37:12.135416   33086 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1024 19:37:12.135422   33086 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1024 19:37:12.135429   33086 command_runner.go:130] > # plugin_dirs = [
	I1024 19:37:12.135433   33086 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1024 19:37:12.135439   33086 command_runner.go:130] > # ]
	I1024 19:37:12.135445   33086 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1024 19:37:12.135451   33086 command_runner.go:130] > [crio.metrics]
	I1024 19:37:12.135456   33086 command_runner.go:130] > # Globally enable or disable metrics support.
	I1024 19:37:12.135463   33086 command_runner.go:130] > enable_metrics = true
	I1024 19:37:12.135467   33086 command_runner.go:130] > # Specify enabled metrics collectors.
	I1024 19:37:12.135475   33086 command_runner.go:130] > # Per default all metrics are enabled.
	I1024 19:37:12.135484   33086 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1024 19:37:12.135490   33086 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1024 19:37:12.135498   33086 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1024 19:37:12.135505   33086 command_runner.go:130] > # metrics_collectors = [
	I1024 19:37:12.135513   33086 command_runner.go:130] > # 	"operations",
	I1024 19:37:12.135520   33086 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1024 19:37:12.135527   33086 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1024 19:37:12.135533   33086 command_runner.go:130] > # 	"operations_errors",
	I1024 19:37:12.135538   33086 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1024 19:37:12.135544   33086 command_runner.go:130] > # 	"image_pulls_by_name",
	I1024 19:37:12.135549   33086 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1024 19:37:12.135555   33086 command_runner.go:130] > # 	"image_pulls_failures",
	I1024 19:37:12.135560   33086 command_runner.go:130] > # 	"image_pulls_successes",
	I1024 19:37:12.135566   33086 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1024 19:37:12.135570   33086 command_runner.go:130] > # 	"image_layer_reuse",
	I1024 19:37:12.135577   33086 command_runner.go:130] > # 	"containers_oom_total",
	I1024 19:37:12.135581   33086 command_runner.go:130] > # 	"containers_oom",
	I1024 19:37:12.135588   33086 command_runner.go:130] > # 	"processes_defunct",
	I1024 19:37:12.135592   33086 command_runner.go:130] > # 	"operations_total",
	I1024 19:37:12.135600   33086 command_runner.go:130] > # 	"operations_latency_seconds",
	I1024 19:37:12.135604   33086 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1024 19:37:12.135611   33086 command_runner.go:130] > # 	"operations_errors_total",
	I1024 19:37:12.135616   33086 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1024 19:37:12.135622   33086 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1024 19:37:12.135627   33086 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1024 19:37:12.135634   33086 command_runner.go:130] > # 	"image_pulls_success_total",
	I1024 19:37:12.135639   33086 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1024 19:37:12.135646   33086 command_runner.go:130] > # 	"containers_oom_count_total",
	I1024 19:37:12.135649   33086 command_runner.go:130] > # ]
	I1024 19:37:12.135655   33086 command_runner.go:130] > # The port on which the metrics server will listen.
	I1024 19:37:12.135662   33086 command_runner.go:130] > # metrics_port = 9090
	I1024 19:37:12.135667   33086 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1024 19:37:12.135673   33086 command_runner.go:130] > # metrics_socket = ""
	I1024 19:37:12.135679   33086 command_runner.go:130] > # The certificate for the secure metrics server.
	I1024 19:37:12.135687   33086 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1024 19:37:12.135695   33086 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1024 19:37:12.135702   33086 command_runner.go:130] > # certificate on any modification event.
	I1024 19:37:12.135706   33086 command_runner.go:130] > # metrics_cert = ""
	I1024 19:37:12.135713   33086 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1024 19:37:12.135718   33086 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1024 19:37:12.135725   33086 command_runner.go:130] > # metrics_key = ""
	I1024 19:37:12.135731   33086 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1024 19:37:12.135738   33086 command_runner.go:130] > [crio.tracing]
	I1024 19:37:12.135743   33086 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1024 19:37:12.135749   33086 command_runner.go:130] > # enable_tracing = false
	I1024 19:37:12.135755   33086 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1024 19:37:12.135761   33086 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1024 19:37:12.135767   33086 command_runner.go:130] > # Number of samples to collect per million spans.
	I1024 19:37:12.135774   33086 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1024 19:37:12.135780   33086 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1024 19:37:12.135787   33086 command_runner.go:130] > [crio.stats]
	I1024 19:37:12.135793   33086 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1024 19:37:12.135800   33086 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1024 19:37:12.135807   33086 command_runner.go:130] > # stats_collection_period = 0
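The crio.conf dump ends here. One value worth cross-checking against the kubeadm configuration generated below is cgroup_manager = "cgroupfs", which has to agree with the kubelet's cgroupDriver. The following is a minimal, hypothetical Go sketch (not part of minikube) that reads the setting with the BurntSushi/toml parser; the path /etc/crio/crio.conf is an assumption based on the dump above.

// Hypothetical sketch: read cgroup_manager from a CRI-O config file so it can
// be compared with the kubelet's cgroupDriver. Assumes the
// github.com/BurntSushi/toml module; this is not minikube's own code.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type crioConfig struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	// Path assumed from the dump above.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager) // expected: "cgroupfs"
}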
	I1024 19:37:12.135860   33086 cni.go:84] Creating CNI manager for ""
	I1024 19:37:12.135871   33086 cni.go:136] 3 nodes found, recommending kindnet
	I1024 19:37:12.135882   33086 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:37:12.135908   33086 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-632589 NodeName:multinode-632589-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:37:12.136030   33086 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-632589-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 19:37:12.136083   33086 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-632589-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
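The kubelet drop-in above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A simplified sketch of how such a unit could be rendered from per-node values with text/template follows; the template and field names are illustrative assumptions, not minikube's actual implementation.

// Hypothetical sketch: render a kubelet systemd drop-in from per-node values.
package main

import (
	"log"
	"os"
	"text/template"
)

type nodeArgs struct {
	KubeletPath, CRISocket, Hostname, NodeIP string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above.
	if err := t.Execute(os.Stdout, nodeArgs{
		KubeletPath: "/var/lib/minikube/binaries/v1.28.3/kubelet",
		CRISocket:   "unix:///var/run/crio/crio.sock",
		Hostname:    "multinode-632589-m02",
		NodeIP:      "192.168.39.186",
	}); err != nil {
		log.Fatal(err)
	}
}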
	I1024 19:37:12.136129   33086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:37:12.146349   33086 command_runner.go:130] > kubeadm
	I1024 19:37:12.146368   33086 command_runner.go:130] > kubectl
	I1024 19:37:12.146375   33086 command_runner.go:130] > kubelet
	I1024 19:37:12.146568   33086 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:37:12.146626   33086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1024 19:37:12.155558   33086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1024 19:37:12.171628   33086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:37:12.187268   33086 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I1024 19:37:12.191380   33086 command_runner.go:130] > 192.168.39.247	control-plane.minikube.internal
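The grep above confirms that control-plane.minikube.internal already resolves to the primary node's IP inside the VM, so no /etc/hosts update is needed. A tiny, hypothetical Go equivalent of that check (same file and host/IP pair as in the log):

// Hypothetical sketch of the /etc/hosts check shown above.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "192.168.39.247" && fields[1] == "control-plane.minikube.internal" {
			fmt.Println("control-plane entry already present")
			return
		}
	}
	fmt.Println("entry missing; it would have to be appended")
}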
	I1024 19:37:12.191519   33086 host.go:66] Checking if "multinode-632589" exists ...
	I1024 19:37:12.191782   33086 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:37:12.191947   33086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:37:12.191995   33086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:37:12.206152   33086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46157
	I1024 19:37:12.206577   33086 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:37:12.207058   33086 main.go:141] libmachine: Using API Version  1
	I1024 19:37:12.207082   33086 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:37:12.207365   33086 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:37:12.207544   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:37:12.207695   33086 start.go:304] JoinCluster: &{Name:multinode-632589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:37:12.207795   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1024 19:37:12.207807   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:37:12.210462   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:37:12.210880   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:37:12.210910   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:37:12.211055   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:37:12.211248   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:37:12.211385   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:37:12.211515   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:37:12.388455   33086 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token i001b6.uerfveflql3dlcvo --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
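The join command printed above authenticates the control plane via --discovery-token-ca-cert-hash, which kubeadm computes as the SHA-256 of the cluster CA certificate's Subject Public Key Info. A small Go sketch to recompute it out of band; the CA path comes from certificatesDir in the kubeadm config above.

// Recompute kubeadm's discovery-token-ca-cert-hash from the cluster CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The hash covers the DER-encoded Subject Public Key Info of the CA.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}

If run against the same CA, the printed value should match the sha256:d65e719e... hash in the join command above.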
	I1024 19:37:12.388513   33086 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.186 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1024 19:37:12.388566   33086 host.go:66] Checking if "multinode-632589" exists ...
	I1024 19:37:12.388953   33086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:37:12.389005   33086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:37:12.403300   33086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1024 19:37:12.403731   33086 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:37:12.404225   33086 main.go:141] libmachine: Using API Version  1
	I1024 19:37:12.404247   33086 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:37:12.404561   33086 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:37:12.404725   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:37:12.404927   33086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-632589-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1024 19:37:12.404949   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:37:12.407711   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:37:12.408169   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:37:12.408202   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:37:12.408338   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:37:12.408500   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:37:12.408670   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:37:12.408790   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:37:12.559498   33086 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1024 19:37:12.616718   33086 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-qvkwv, kube-system/kube-proxy-6vn7s
	I1024 19:37:15.639992   33086 command_runner.go:130] > node/multinode-632589-m02 cordoned
	I1024 19:37:15.640029   33086 command_runner.go:130] > pod "busybox-5bc68d56bd-wrmmm" has DeletionTimestamp older than 1 seconds, skipping
	I1024 19:37:15.640039   33086 command_runner.go:130] > node/multinode-632589-m02 drained
	I1024 19:37:15.640066   33086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-632589-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.235114408s)
	I1024 19:37:15.640083   33086 node.go:108] successfully drained node "m02"
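The drain above is the usual cordon-then-evict sequence, executed here through the bundled kubectl binary (note the deprecated --delete-local-data flag, which newer kubectl replaces with --delete-emptydir-data). As a rough client-go sketch of the cordon half only, not minikube's actual implementation, marking the node unschedulable with the kubeconfig this log loads would look like this:

// Sketch: cordon the worker node before removing and rejoining it.
// Illustrative only; the kubeconfig path and node name are taken from the log above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17485-9023/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-632589-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node.Spec.Unschedulable = true // same effect as the "node/multinode-632589-m02 cordoned" line above
	if _, err := cs.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("cordoned", node.Name)
}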
	I1024 19:37:15.640403   33086 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:37:15.640612   33086 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:37:15.640995   33086 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1024 19:37:15.641048   33086 round_trippers.go:463] DELETE https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:37:15.641056   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:15.641063   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:15.641070   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:15.641077   33086 round_trippers.go:473]     Content-Type: application/json
	I1024 19:37:15.653908   33086 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1024 19:37:15.653933   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:15.653943   33086 round_trippers.go:580]     Content-Length: 171
	I1024 19:37:15.653950   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:15 GMT
	I1024 19:37:15.653957   33086 round_trippers.go:580]     Audit-Id: dafb9ccc-2dc1-4ca4-961f-a46cb230ac73
	I1024 19:37:15.653965   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:15.653972   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:15.653979   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:15.653993   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:15.654041   33086 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-632589-m02","kind":"nodes","uid":"fd273ad2-1efc-48f7-8a20-c8902dff1484"}}
	I1024 19:37:15.654087   33086 node.go:124] successfully deleted node "m02"
	I1024 19:37:15.654097   33086 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.186 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
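The removal itself is a plain DELETE on the Node object, as the request/response pair above shows. A minimal client-go equivalent, as a sketch that assumes a clientset cs built the same way as in the cordon example earlier:

// Sketch: delete the stale Node object so the worker can rejoin cleanly.
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteWorkerNode(cs *kubernetes.Clientset) error {
	// Mirrors DELETE https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	return cs.CoreV1().Nodes().Delete(context.TODO(), "multinode-632589-m02", metav1.DeleteOptions{})
}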
	I1024 19:37:15.654121   33086 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.186 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1024 19:37:15.654142   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i001b6.uerfveflql3dlcvo --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-632589-m02"
	I1024 19:37:15.704121   33086 command_runner.go:130] ! W1024 19:37:15.691861    2616 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1024 19:37:15.704170   33086 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1024 19:37:15.848211   33086 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1024 19:37:15.848236   33086 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1024 19:37:16.593175   33086 command_runner.go:130] > [preflight] Running pre-flight checks
	I1024 19:37:16.593205   33086 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1024 19:37:16.593215   33086 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1024 19:37:16.593227   33086 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:37:16.593237   33086 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:37:16.593245   33086 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1024 19:37:16.593259   33086 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1024 19:37:16.593271   33086 command_runner.go:130] > This node has joined the cluster:
	I1024 19:37:16.593285   33086 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1024 19:37:16.593312   33086 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1024 19:37:16.593327   33086 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1024 19:37:16.593741   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1024 19:37:16.898327   33086 start.go:306] JoinCluster complete in 4.69062578s
	I1024 19:37:16.898354   33086 cni.go:84] Creating CNI manager for ""
	I1024 19:37:16.898362   33086 cni.go:136] 3 nodes found, recommending kindnet
	I1024 19:37:16.898414   33086 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:37:16.903800   33086 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1024 19:37:16.903818   33086 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1024 19:37:16.903825   33086 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1024 19:37:16.903831   33086 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:37:16.903840   33086 command_runner.go:130] > Access: 2023-10-24 19:34:45.736816710 +0000
	I1024 19:37:16.903849   33086 command_runner.go:130] > Modify: 2023-10-16 21:25:26.000000000 +0000
	I1024 19:37:16.903858   33086 command_runner.go:130] > Change: 2023-10-24 19:34:43.720816710 +0000
	I1024 19:37:16.903864   33086 command_runner.go:130] >  Birth: -
	I1024 19:37:16.903916   33086 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 19:37:16.903929   33086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:37:16.922296   33086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 19:37:17.251559   33086 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1024 19:37:17.260618   33086 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1024 19:37:17.263063   33086 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1024 19:37:17.276431   33086 command_runner.go:130] > daemonset.apps/kindnet configured
	I1024 19:37:17.279627   33086 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:37:17.279848   33086 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:37:17.280179   33086 round_trippers.go:463] GET https://192.168.39.247:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 19:37:17.280192   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.280200   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.280205   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.282893   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:37:17.282909   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.282915   33086 round_trippers.go:580]     Audit-Id: 22d62527-cfc6-476b-9c1d-374689b3e16b
	I1024 19:37:17.282921   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.282926   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.282931   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.282937   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.282945   33086 round_trippers.go:580]     Content-Length: 291
	I1024 19:37:17.282950   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.283121   33086 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d94f45ae-0601-4f22-bf81-4e1e0b9f4023","resourceVersion":"875","creationTimestamp":"2023-10-24T19:24:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1024 19:37:17.283219   33086 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-632589" context rescaled to 1 replicas
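The rescale goes through the deployment's scale subresource (the GET .../deployments/coredns/scale request above). A sketch of the same operation in client-go, again assuming a clientset cs as in the earlier example:

// Sketch: pin the coredns deployment to a single replica via the scale subresource.
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func scaleCoreDNS(cs *kubernetes.Clientset) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1 // matches the "rescaled to 1 replicas" message above
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{})
	return err
}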
	I1024 19:37:17.283251   33086 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.186 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1024 19:37:17.284885   33086 out.go:177] * Verifying Kubernetes components...
	I1024 19:37:17.286090   33086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:37:17.300335   33086 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:37:17.300616   33086 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:37:17.300877   33086 node_ready.go:35] waiting up to 6m0s for node "multinode-632589-m02" to be "Ready" ...
	I1024 19:37:17.300948   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:37:17.300959   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.300971   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.300981   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.304475   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:37:17.304490   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.304497   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.304503   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.304508   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.304513   33086 round_trippers.go:580]     Audit-Id: 47f36057-8604-4725-8bd2-5b72c5117013
	I1024 19:37:17.304517   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.304522   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.305170   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"f34f53a3-bdef-415c-99af-e8304feacde1","resourceVersion":"1015","creationTimestamp":"2023-10-24T19:37:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:37:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:37:16Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1024 19:37:17.305429   33086 node_ready.go:49] node "multinode-632589-m02" has status "Ready":"True"
	I1024 19:37:17.305443   33086 node_ready.go:38] duration metric: took 4.551589ms waiting for node "multinode-632589-m02" to be "Ready" ...
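Readiness here is read from the Node's Ready condition in the status returned above; the pod checks that follow apply the same pattern to the PodReady condition. A sketch of that check, assuming cs as before:

// Sketch: report whether a node's Ready condition is True, as the node_ready check in the log does.
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}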
	I1024 19:37:17.305453   33086 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:37:17.305515   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:37:17.305525   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.305533   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.305545   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.309286   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:37:17.309307   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.309314   33086 round_trippers.go:580]     Audit-Id: 67d6cc02-041d-425d-bbec-04160e8bac56
	I1024 19:37:17.309319   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.309324   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.309329   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.309334   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.309339   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.311378   33086 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1021"},"items":[{"metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"856","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82235 chars]
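The harness lists every pod in kube-system and then filters by the labels named above (k8s-app=kube-dns, component=etcd, and so on). One way to fetch a single group directly is a label selector on the list call; a sketch with the same assumed clientset:

// Sketch: list only the kube-dns/coredns pods instead of the whole kube-system namespace.
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func listDNSPods(cs *kubernetes.Clientset) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(pods.Items))
	for _, p := range pods.Items {
		names = append(names, p.Name)
	}
	return names, nil
}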
	I1024 19:37:17.313674   33086 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:17.313732   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:37:17.313743   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.313754   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.313764   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.315562   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:37:17.315575   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.315584   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.315592   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.315601   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.315610   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.315619   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.315627   33086 round_trippers.go:580]     Audit-Id: 2c7a3f67-2045-48f4-afbb-d00ee8590d55
	I1024 19:37:17.315763   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"856","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1024 19:37:17.316137   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:37:17.316147   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.316154   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.316160   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.317799   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:37:17.317810   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.317816   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.317821   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.317826   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.317831   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.317836   33086 round_trippers.go:580]     Audit-Id: a0163bdd-3736-4c21-896d-b13ca19af594
	I1024 19:37:17.317841   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.318133   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1024 19:37:17.318401   33086 pod_ready.go:92] pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:37:17.318413   33086 pod_ready.go:81] duration metric: took 4.723417ms waiting for pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:17.318420   33086 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:17.318481   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-632589
	I1024 19:37:17.318490   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.318496   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.318502   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.320229   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:37:17.320242   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.320248   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.320254   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.320259   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.320264   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.320269   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.320275   33086 round_trippers.go:580]     Audit-Id: db3891e7-a6dd-465e-b193-54c975be022a
	I1024 19:37:17.320414   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-632589","namespace":"kube-system","uid":"a84a9833-e3b8-4148-9ee7-3f4479a10186","resourceVersion":"849","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.247:2379","kubernetes.io/config.hash":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.mirror":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.seen":"2023-10-24T19:24:56.213299221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1024 19:37:17.320781   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:37:17.320789   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.320796   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.320806   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.325888   33086 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1024 19:37:17.325901   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.325911   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.325919   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.325927   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.325934   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.325942   33086 round_trippers.go:580]     Audit-Id: 33b3d99f-81bc-431c-8d78-8247d5878d85
	I1024 19:37:17.325967   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.326200   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1024 19:37:17.326471   33086 pod_ready.go:92] pod "etcd-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:37:17.326483   33086 pod_ready.go:81] duration metric: took 8.057509ms waiting for pod "etcd-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:17.326496   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:17.326547   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-632589
	I1024 19:37:17.326556   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.326563   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.326569   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.328281   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:37:17.328292   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.328298   33086 round_trippers.go:580]     Audit-Id: ca8a8073-2996-48bd-befb-d6a8be4f58ee
	I1024 19:37:17.328304   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.328312   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.328319   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.328327   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.328339   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.328525   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-632589","namespace":"kube-system","uid":"34fcbf72-bf92-477f-8c1c-b0fd908c561d","resourceVersion":"868","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.247:8443","kubernetes.io/config.hash":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.mirror":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.seen":"2023-10-24T19:24:56.213304140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1024 19:37:17.328846   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:37:17.328858   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.328865   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.328870   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.331001   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:37:17.331018   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.331024   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.331029   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.331034   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.331042   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.331050   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.331066   33086 round_trippers.go:580]     Audit-Id: 2b57c68b-216a-4325-8016-a23df71130b5
	I1024 19:37:17.331252   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1024 19:37:17.331541   33086 pod_ready.go:92] pod "kube-apiserver-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:37:17.331553   33086 pod_ready.go:81] duration metric: took 5.051541ms waiting for pod "kube-apiserver-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:17.331560   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:17.331604   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-632589
	I1024 19:37:17.331615   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.331627   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.331638   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.333447   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:37:17.333459   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.333468   33086 round_trippers.go:580]     Audit-Id: 76ed8fa7-d8d8-44be-9e50-12d79d17236e
	I1024 19:37:17.333477   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.333484   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.333492   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.333503   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.333516   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.333733   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-632589","namespace":"kube-system","uid":"6eb03208-9b7f-4b5d-a7cf-03dd9c7948e6","resourceVersion":"850","creationTimestamp":"2023-10-24T19:24:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9a4a5ca64f08e8d78cd58402e3f15810","kubernetes.io/config.mirror":"9a4a5ca64f08e8d78cd58402e3f15810","kubernetes.io/config.seen":"2023-10-24T19:24:47.530352200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1024 19:37:17.334034   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:37:17.334044   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.334051   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.334056   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.335892   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:37:17.335904   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.335910   33086 round_trippers.go:580]     Audit-Id: 48b67d11-3afd-4b9b-b107-1b4f3b27a7c8
	I1024 19:37:17.335916   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.335921   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.335927   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.335936   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.335944   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.336466   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1024 19:37:17.336693   33086 pod_ready.go:92] pod "kube-controller-manager-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:37:17.336704   33086 pod_ready.go:81] duration metric: took 5.139573ms waiting for pod "kube-controller-manager-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:17.336711   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6vn7s" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:17.501225   33086 request.go:629] Waited for 164.45111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:37:17.501288   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:37:17.501309   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.501322   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.501332   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.504656   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:37:17.504679   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.504687   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.504693   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.504700   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.504706   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.504715   33086 round_trippers.go:580]     Audit-Id: c53033c9-d2b6-454b-8d58-5878acd65fed
	I1024 19:37:17.504724   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.504952   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6vn7s","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6b9189d-1bbe-4de8-a0d8-4ea43b55a45b","resourceVersion":"1019","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5886 chars]
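The "Waited for ... due to client-side throttling" lines come from client-go's client-side rate limiter; the rest.Config dumps above show QPS:0 and Burst:0, which means the defaults of 5 requests/second and a burst of 10. If a caller needed to poll faster it could raise those limits before building the clientset; a sketch only, not a change minikube makes here:

// Sketch: raise client-go's client-side rate limits before creating the clientset.
import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func fastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5 requests/second when left at 0
	cfg.Burst = 100 // default burst is 10 when left at 0
	return kubernetes.NewForConfig(cfg)
}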
	I1024 19:37:17.701784   33086 request.go:629] Waited for 196.361313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:37:17.701852   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:37:17.701859   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.701872   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.701881   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.705210   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:37:17.705228   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.705235   33086 round_trippers.go:580]     Audit-Id: e9573b01-6bad-487a-80cd-eaf1d812b1de
	I1024 19:37:17.705240   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.705247   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.705255   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.705263   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.705271   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.705448   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"f34f53a3-bdef-415c-99af-e8304feacde1","resourceVersion":"1015","creationTimestamp":"2023-10-24T19:37:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:37:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:37:16Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1024 19:37:17.901069   33086 request.go:629] Waited for 195.241857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:37:17.901168   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:37:17.901182   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:17.901209   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:17.901223   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:17.904731   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:37:17.904748   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:17.904758   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:17.904765   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:17.904773   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:17.904781   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:17.904790   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:17 GMT
	I1024 19:37:17.904799   33086 round_trippers.go:580]     Audit-Id: 23871f6b-ef4b-4663-8a63-0f5494f1e5fb
	I1024 19:37:17.905419   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6vn7s","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6b9189d-1bbe-4de8-a0d8-4ea43b55a45b","resourceVersion":"1019","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5886 chars]
	I1024 19:37:18.101055   33086 request.go:629] Waited for 195.196604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:37:18.101160   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:37:18.101173   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:18.101186   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:18.101201   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:18.103374   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:37:18.103390   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:18.103398   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:18 GMT
	I1024 19:37:18.103403   33086 round_trippers.go:580]     Audit-Id: 26f3c9a4-a0a4-4716-9b8e-e315c22f1539
	I1024 19:37:18.103408   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:18.103413   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:18.103419   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:18.103424   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:18.103644   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"f34f53a3-bdef-415c-99af-e8304feacde1","resourceVersion":"1015","creationTimestamp":"2023-10-24T19:37:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:37:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:37:16Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1024 19:37:18.604628   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:37:18.604648   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:18.604656   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:18.604662   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:18.609821   33086 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1024 19:37:18.609841   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:18.609851   33086 round_trippers.go:580]     Audit-Id: 83ca5656-769d-4a9b-a1b7-f51cd4a10eea
	I1024 19:37:18.609859   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:18.609868   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:18.609877   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:18.609892   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:18.609905   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:18 GMT
	I1024 19:37:18.610630   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6vn7s","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6b9189d-1bbe-4de8-a0d8-4ea43b55a45b","resourceVersion":"1030","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5730 chars]
	I1024 19:37:18.611025   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:37:18.611039   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:18.611049   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:18.611058   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:18.613205   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:37:18.613219   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:18.613228   33086 round_trippers.go:580]     Audit-Id: 317b041b-fd7f-48b4-853e-cf9b3101f100
	I1024 19:37:18.613237   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:18.613244   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:18.613256   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:18.613269   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:18.613279   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:18 GMT
	I1024 19:37:18.613381   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"f34f53a3-bdef-415c-99af-e8304feacde1","resourceVersion":"1015","creationTimestamp":"2023-10-24T19:37:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:37:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:37:16Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1024 19:37:18.613614   33086 pod_ready.go:92] pod "kube-proxy-6vn7s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:37:18.613631   33086 pod_ready.go:81] duration metric: took 1.276913992s waiting for pod "kube-proxy-6vn7s" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:18.613645   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gd49s" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:18.702020   33086 request.go:629] Waited for 88.316013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd49s
	I1024 19:37:18.702085   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd49s
	I1024 19:37:18.702090   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:18.702098   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:18.702104   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:18.705189   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:37:18.705214   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:18.705224   33086 round_trippers.go:580]     Audit-Id: 16a80b5c-cc95-4e3e-a0d9-0b8c5c4642ae
	I1024 19:37:18.705232   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:18.705239   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:18.705247   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:18.705255   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:18.705264   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:18 GMT
	I1024 19:37:18.705398   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gd49s","generateName":"kube-proxy-","namespace":"kube-system","uid":"a1c573fd-3f4b-4d90-a366-6d859a121185","resourceVersion":"834","creationTimestamp":"2023-10-24T19:25:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1024 19:37:18.901073   33086 request.go:629] Waited for 195.279585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:37:18.901127   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:37:18.901132   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:18.901140   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:18.901145   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:18.903975   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:37:18.903996   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:18.904007   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:18.904014   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:18 GMT
	I1024 19:37:18.904022   33086 round_trippers.go:580]     Audit-Id: 81943b06-d1b9-4512-8fd5-d0c73d1a9d5a
	I1024 19:37:18.904030   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:18.904039   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:18.904052   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:18.904394   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1024 19:37:18.904720   33086 pod_ready.go:92] pod "kube-proxy-gd49s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:37:18.904735   33086 pod_ready.go:81] duration metric: took 291.079335ms waiting for pod "kube-proxy-gd49s" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:18.904747   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vjr8q" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:19.101089   33086 request.go:629] Waited for 196.278841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjr8q
	I1024 19:37:19.101148   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjr8q
	I1024 19:37:19.101154   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:19.101161   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:19.101167   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:19.104305   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:37:19.104325   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:19.104338   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:19 GMT
	I1024 19:37:19.104346   33086 round_trippers.go:580]     Audit-Id: 55df7a53-2ed8-4c80-8d55-6a8164c51c5d
	I1024 19:37:19.104354   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:19.104361   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:19.104369   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:19.104375   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:19.104729   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vjr8q","generateName":"kube-proxy-","namespace":"kube-system","uid":"844852b2-3dbb-4d52-a752-b39021adfc04","resourceVersion":"706","creationTimestamp":"2023-10-24T19:26:43Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:26:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5521 chars]
	I1024 19:37:19.301510   33086 request.go:629] Waited for 196.333993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m03
	I1024 19:37:19.301573   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m03
	I1024 19:37:19.301580   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:19.301591   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:19.301601   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:19.304726   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:37:19.304743   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:19.304750   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:19 GMT
	I1024 19:37:19.304756   33086 round_trippers.go:580]     Audit-Id: 03e13470-0e0f-4798-9509-35f2f6de75c4
	I1024 19:37:19.304761   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:19.304766   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:19.304775   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:19.304780   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:19.305516   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m03","uid":"b46ce2c5-5d6c-4894-ad88-10111966a53a","resourceVersion":"871","creationTimestamp":"2023-10-24T19:27:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:27:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3412 chars]
	I1024 19:37:19.305770   33086 pod_ready.go:92] pod "kube-proxy-vjr8q" in "kube-system" namespace has status "Ready":"True"
	I1024 19:37:19.305786   33086 pod_ready.go:81] duration metric: took 401.030281ms waiting for pod "kube-proxy-vjr8q" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:19.305798   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:19.501175   33086 request.go:629] Waited for 195.299544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-632589
	I1024 19:37:19.501252   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-632589
	I1024 19:37:19.501263   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:19.501274   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:19.501286   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:19.504113   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:37:19.504136   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:19.504145   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:19 GMT
	I1024 19:37:19.504153   33086 round_trippers.go:580]     Audit-Id: eee2c85f-d4da-49ba-b2c4-d0a03cd26568
	I1024 19:37:19.504161   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:19.504171   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:19.504187   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:19.504195   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:19.504600   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-632589","namespace":"kube-system","uid":"e85a7c19-1a25-42f5-81bd-16ed7070ca3c","resourceVersion":"857","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"83154ed970e6208e036ff8de26a58e6d","kubernetes.io/config.mirror":"83154ed970e6208e036ff8de26a58e6d","kubernetes.io/config.seen":"2023-10-24T19:24:56.213306721Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1024 19:37:19.701288   33086 request.go:629] Waited for 196.362417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:37:19.701366   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:37:19.701370   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:19.701378   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:19.701387   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:19.704527   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:37:19.704547   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:19.704560   33086 round_trippers.go:580]     Audit-Id: 3b9cc406-e079-4c34-85f6-3e69463a14cb
	I1024 19:37:19.704567   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:19.704574   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:19.704582   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:19.704591   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:19.704600   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:19 GMT
	I1024 19:37:19.705093   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1024 19:37:19.705397   33086 pod_ready.go:92] pod "kube-scheduler-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:37:19.705413   33086 pod_ready.go:81] duration metric: took 399.605462ms waiting for pod "kube-scheduler-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:37:19.705425   33086 pod_ready.go:38] duration metric: took 2.399957943s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
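	The pod_ready checks above poll the API server until each system pod reports the Ready condition. A rough manual equivalent, assuming the kubeconfig context is named after the profile (multinode-632589), would be:

		kubectl --context multinode-632589 -n kube-system get pod kube-proxy-6vn7s \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
		# prints "True" once the pod is Ready, matching the pod_ready.go:92 lines above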
	I1024 19:37:19.705449   33086 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:37:19.705499   33086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:37:19.719741   33086 system_svc.go:56] duration metric: took 14.289053ms WaitForService to wait for kubelet.
	I1024 19:37:19.719764   33086 kubeadm.go:581] duration metric: took 2.436486788s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:37:19.719787   33086 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:37:19.901133   33086 request.go:629] Waited for 181.287114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes
	I1024 19:37:19.901206   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes
	I1024 19:37:19.901219   33086 round_trippers.go:469] Request Headers:
	I1024 19:37:19.901226   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:37:19.901233   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:37:19.908276   33086 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1024 19:37:19.908298   33086 round_trippers.go:577] Response Headers:
	I1024 19:37:19.908305   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:37:19.908311   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:37:19.908316   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:37:19 GMT
	I1024 19:37:19.908321   33086 round_trippers.go:580]     Audit-Id: acc7a1bf-a25a-43a1-9239-b462857ba3bf
	I1024 19:37:19.908326   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:37:19.908331   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:37:19.909718   33086 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1032"},"items":[{"metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15105 chars]
	I1024 19:37:19.910424   33086 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:37:19.910446   33086 node_conditions.go:123] node cpu capacity is 2
	I1024 19:37:19.910458   33086 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:37:19.910464   33086 node_conditions.go:123] node cpu capacity is 2
	I1024 19:37:19.910470   33086 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:37:19.910473   33086 node_conditions.go:123] node cpu capacity is 2
	I1024 19:37:19.910478   33086 node_conditions.go:105] duration metric: took 190.686116ms to run NodePressure ...
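	The NodePressure verification above reads each node's capacity out of the NodeList response (three nodes, each reporting 17784752Ki ephemeral storage and 2 CPUs). A comparable spot check, again assuming the same context name, is:

		kubectl --context multinode-632589 get nodes \
		  -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage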
	I1024 19:37:19.910496   33086 start.go:228] waiting for startup goroutines ...
	I1024 19:37:19.910530   33086 start.go:242] writing updated cluster config ...
	I1024 19:37:19.911063   33086 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:37:19.911178   33086 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/config.json ...
	I1024 19:37:19.914459   33086 out.go:177] * Starting worker node multinode-632589-m03 in cluster multinode-632589
	I1024 19:37:19.915710   33086 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:37:19.915732   33086 cache.go:57] Caching tarball of preloaded images
	I1024 19:37:19.915823   33086 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 19:37:19.915834   33086 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:37:19.915907   33086 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/config.json ...
	I1024 19:37:19.916050   33086 start.go:365] acquiring machines lock for multinode-632589-m03: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:37:19.916089   33086 start.go:369] acquired machines lock for "multinode-632589-m03" in 21.221µs
	I1024 19:37:19.916100   33086 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:37:19.916105   33086 fix.go:54] fixHost starting: m03
	I1024 19:37:19.916323   33086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:37:19.916350   33086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:37:19.930329   33086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I1024 19:37:19.930711   33086 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:37:19.931144   33086 main.go:141] libmachine: Using API Version  1
	I1024 19:37:19.931167   33086 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:37:19.931449   33086 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:37:19.931600   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .DriverName
	I1024 19:37:19.931730   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetState
	I1024 19:37:19.933265   33086 fix.go:102] recreateIfNeeded on multinode-632589-m03: state=Running err=<nil>
	W1024 19:37:19.933283   33086 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:37:19.934865   33086 out.go:177] * Updating the running kvm2 "multinode-632589-m03" VM ...
	I1024 19:37:19.936055   33086 machine.go:88] provisioning docker machine ...
	I1024 19:37:19.936071   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .DriverName
	I1024 19:37:19.936261   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetMachineName
	I1024 19:37:19.936417   33086 buildroot.go:166] provisioning hostname "multinode-632589-m03"
	I1024 19:37:19.936432   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetMachineName
	I1024 19:37:19.936576   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHHostname
	I1024 19:37:19.938930   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:37:19.939312   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:9f:44", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:27:17 +0000 UTC Type:0 Mac:52:54:00:e8:9f:44 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-632589-m03 Clientid:01:52:54:00:e8:9f:44}
	I1024 19:37:19.939341   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined IP address 192.168.39.13 and MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:37:19.939458   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHPort
	I1024 19:37:19.939615   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHKeyPath
	I1024 19:37:19.939737   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHKeyPath
	I1024 19:37:19.939831   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHUsername
	I1024 19:37:19.939946   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:37:19.940252   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I1024 19:37:19.940270   33086 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-632589-m03 && echo "multinode-632589-m03" | sudo tee /etc/hostname
	I1024 19:37:20.076049   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-632589-m03
	
	I1024 19:37:20.076078   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHHostname
	I1024 19:37:20.078802   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:37:20.079128   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:9f:44", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:27:17 +0000 UTC Type:0 Mac:52:54:00:e8:9f:44 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-632589-m03 Clientid:01:52:54:00:e8:9f:44}
	I1024 19:37:20.079156   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined IP address 192.168.39.13 and MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:37:20.079334   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHPort
	I1024 19:37:20.079500   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHKeyPath
	I1024 19:37:20.079678   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHKeyPath
	I1024 19:37:20.079823   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHUsername
	I1024 19:37:20.079995   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:37:20.080298   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I1024 19:37:20.080317   33086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-632589-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-632589-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-632589-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:37:20.197951   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
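	Hostname provisioning for the m03 node amounts to the two SSH commands logged above: set the hostname, then make sure /etc/hosts resolves it. Collapsed into a simplified sketch (it skips the sed branch that rewrites an existing 127.0.1.1 entry):

		sudo hostname multinode-632589-m03 && echo "multinode-632589-m03" | sudo tee /etc/hostname
		grep -q 'multinode-632589-m03' /etc/hosts || \
		  echo '127.0.1.1 multinode-632589-m03' | sudo tee -a /etc/hosts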
	I1024 19:37:20.197980   33086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 19:37:20.197998   33086 buildroot.go:174] setting up certificates
	I1024 19:37:20.198008   33086 provision.go:83] configureAuth start
	I1024 19:37:20.198021   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetMachineName
	I1024 19:37:20.198274   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetIP
	I1024 19:37:20.200970   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:37:20.201405   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:9f:44", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:27:17 +0000 UTC Type:0 Mac:52:54:00:e8:9f:44 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-632589-m03 Clientid:01:52:54:00:e8:9f:44}
	I1024 19:37:20.201442   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined IP address 192.168.39.13 and MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:37:20.201599   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHHostname
	I1024 19:37:20.204273   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:37:20.204592   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:9f:44", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:27:17 +0000 UTC Type:0 Mac:52:54:00:e8:9f:44 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-632589-m03 Clientid:01:52:54:00:e8:9f:44}
	I1024 19:37:20.204623   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined IP address 192.168.39.13 and MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:37:20.204739   33086 provision.go:138] copyHostCerts
	I1024 19:37:20.204776   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:37:20.204806   33086 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 19:37:20.204814   33086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:37:20.204881   33086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 19:37:20.204945   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:37:20.204961   33086 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 19:37:20.204967   33086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:37:20.204989   33086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 19:37:20.205031   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:37:20.205046   33086 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 19:37:20.205053   33086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:37:20.205071   33086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 19:37:20.205113   33086 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.multinode-632589-m03 san=[192.168.39.13 192.168.39.13 localhost 127.0.0.1 minikube multinode-632589-m03]
	I1024 19:37:20.473735   33086 provision.go:172] copyRemoteCerts
	I1024 19:37:20.473782   33086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:37:20.473805   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHHostname
	I1024 19:37:20.476572   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:37:20.476941   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:9f:44", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:27:17 +0000 UTC Type:0 Mac:52:54:00:e8:9f:44 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-632589-m03 Clientid:01:52:54:00:e8:9f:44}
	I1024 19:37:20.476976   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined IP address 192.168.39.13 and MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:37:20.477129   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHPort
	I1024 19:37:20.477376   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHKeyPath
	I1024 19:37:20.477538   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHUsername
	I1024 19:37:20.477696   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m03/id_rsa Username:docker}
	I1024 19:37:20.567954   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 19:37:20.568036   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 19:37:20.591583   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 19:37:20.591654   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1024 19:37:20.616353   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 19:37:20.616414   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:37:20.641066   33086 provision.go:86] duration metric: configureAuth took 443.045484ms
	I1024 19:37:20.641092   33086 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:37:20.641331   33086 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:37:20.641400   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHHostname
	I1024 19:37:20.643583   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:37:20.643988   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:9f:44", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:27:17 +0000 UTC Type:0 Mac:52:54:00:e8:9f:44 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-632589-m03 Clientid:01:52:54:00:e8:9f:44}
	I1024 19:37:20.644041   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined IP address 192.168.39.13 and MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:37:20.644198   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHPort
	I1024 19:37:20.644401   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHKeyPath
	I1024 19:37:20.644585   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHKeyPath
	I1024 19:37:20.644734   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHUsername
	I1024 19:37:20.644887   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:37:20.645185   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I1024 19:37:20.645200   33086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:38:51.267171   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:38:51.267207   33086 machine.go:91] provisioned docker machine in 1m31.331139758s
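	Based on the command issued at 19:37:20 and the output echoed back above, /etc/sysconfig/crio.minikube on the node now holds a single setting:

		CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

	Most of the 1m31s spent provisioning this machine appears to fall inside the trailing "sudo systemctl restart crio" of that same command, since the SSH call only returned at 19:38:51.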
	I1024 19:38:51.267219   33086 start.go:300] post-start starting for "multinode-632589-m03" (driver="kvm2")
	I1024 19:38:51.267232   33086 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:38:51.267259   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .DriverName
	I1024 19:38:51.267596   33086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:38:51.267632   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHHostname
	I1024 19:38:51.270737   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:38:51.271078   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:9f:44", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:27:17 +0000 UTC Type:0 Mac:52:54:00:e8:9f:44 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-632589-m03 Clientid:01:52:54:00:e8:9f:44}
	I1024 19:38:51.271102   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined IP address 192.168.39.13 and MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:38:51.271235   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHPort
	I1024 19:38:51.271429   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHKeyPath
	I1024 19:38:51.271595   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHUsername
	I1024 19:38:51.271751   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m03/id_rsa Username:docker}
	I1024 19:38:51.363523   33086 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:38:51.367550   33086 command_runner.go:130] > NAME=Buildroot
	I1024 19:38:51.367567   33086 command_runner.go:130] > VERSION=2021.02.12-1-g71212f5-dirty
	I1024 19:38:51.367573   33086 command_runner.go:130] > ID=buildroot
	I1024 19:38:51.367580   33086 command_runner.go:130] > VERSION_ID=2021.02.12
	I1024 19:38:51.367587   33086 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1024 19:38:51.367620   33086 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 19:38:51.367636   33086 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 19:38:51.367704   33086 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 19:38:51.367794   33086 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 19:38:51.367806   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> /etc/ssl/certs/162982.pem
	I1024 19:38:51.367903   33086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:38:51.377316   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:38:51.400181   33086 start.go:303] post-start completed in 132.949528ms
	I1024 19:38:51.400201   33086 fix.go:56] fixHost completed within 1m31.484095091s
	I1024 19:38:51.400225   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHHostname
	I1024 19:38:51.402963   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:38:51.403338   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:9f:44", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:27:17 +0000 UTC Type:0 Mac:52:54:00:e8:9f:44 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-632589-m03 Clientid:01:52:54:00:e8:9f:44}
	I1024 19:38:51.403368   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined IP address 192.168.39.13 and MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:38:51.403530   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHPort
	I1024 19:38:51.403747   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHKeyPath
	I1024 19:38:51.403965   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHKeyPath
	I1024 19:38:51.404155   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHUsername
	I1024 19:38:51.404334   33086 main.go:141] libmachine: Using SSH client type: native
	I1024 19:38:51.404644   33086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I1024 19:38:51.404657   33086 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 19:38:51.521578   33086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698176331.513906145
	
	I1024 19:38:51.521601   33086 fix.go:206] guest clock: 1698176331.513906145
	I1024 19:38:51.521610   33086 fix.go:219] Guest: 2023-10-24 19:38:51.513906145 +0000 UTC Remote: 2023-10-24 19:38:51.400206128 +0000 UTC m=+556.514027118 (delta=113.700017ms)
	I1024 19:38:51.521628   33086 fix.go:190] guest clock delta is within tolerance: 113.700017ms
	I1024 19:38:51.521635   33086 start.go:83] releasing machines lock for "multinode-632589-m03", held for 1m31.605538619s
	I1024 19:38:51.521658   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .DriverName
	I1024 19:38:51.521907   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetIP
	I1024 19:38:51.524340   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:38:51.524738   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:9f:44", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:27:17 +0000 UTC Type:0 Mac:52:54:00:e8:9f:44 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-632589-m03 Clientid:01:52:54:00:e8:9f:44}
	I1024 19:38:51.524772   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined IP address 192.168.39.13 and MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:38:51.526902   33086 out.go:177] * Found network options:
	I1024 19:38:51.528448   33086 out.go:177]   - NO_PROXY=192.168.39.247,192.168.39.186
	W1024 19:38:51.529864   33086 proxy.go:119] fail to check proxy env: Error ip not in block
	W1024 19:38:51.529882   33086 proxy.go:119] fail to check proxy env: Error ip not in block
	I1024 19:38:51.529898   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .DriverName
	I1024 19:38:51.530423   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .DriverName
	I1024 19:38:51.530597   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .DriverName
	I1024 19:38:51.530692   33086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:38:51.530719   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHHostname
	W1024 19:38:51.530793   33086 proxy.go:119] fail to check proxy env: Error ip not in block
	W1024 19:38:51.530815   33086 proxy.go:119] fail to check proxy env: Error ip not in block
	I1024 19:38:51.530872   33086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:38:51.530890   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHHostname
	I1024 19:38:51.533417   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:38:51.533726   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:38:51.533810   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:9f:44", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:27:17 +0000 UTC Type:0 Mac:52:54:00:e8:9f:44 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-632589-m03 Clientid:01:52:54:00:e8:9f:44}
	I1024 19:38:51.533843   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined IP address 192.168.39.13 and MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:38:51.533969   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHPort
	I1024 19:38:51.534133   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHKeyPath
	I1024 19:38:51.534323   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHUsername
	I1024 19:38:51.534327   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:9f:44", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:27:17 +0000 UTC Type:0 Mac:52:54:00:e8:9f:44 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-632589-m03 Clientid:01:52:54:00:e8:9f:44}
	I1024 19:38:51.534354   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined IP address 192.168.39.13 and MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:38:51.534480   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHPort
	I1024 19:38:51.534474   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m03/id_rsa Username:docker}
	I1024 19:38:51.534628   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHKeyPath
	I1024 19:38:51.534745   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetSSHUsername
	I1024 19:38:51.534858   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m03/id_rsa Username:docker}
	I1024 19:38:51.768353   33086 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1024 19:38:51.768409   33086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:38:51.774008   33086 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1024 19:38:51.774140   33086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:38:51.774205   33086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:38:51.783216   33086 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1024 19:38:51.783236   33086 start.go:472] detecting cgroup driver to use...
	I1024 19:38:51.783304   33086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:38:51.797445   33086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:38:51.809670   33086 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:38:51.809724   33086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:38:51.823573   33086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:38:51.835988   33086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:38:51.963236   33086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:38:52.081661   33086 docker.go:214] disabling docker service ...
	I1024 19:38:52.081722   33086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:38:52.096221   33086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:38:52.109743   33086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:38:52.225485   33086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:38:52.350484   33086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
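	Switching the node to CRI-O-only starts with taking the Docker-based runtimes out of the picture, exactly as the systemctl calls above show; as a standalone sketch of the same sequence:

		sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
		sudo systemctl disable cri-docker.socket docker.socket
		sudo systemctl mask cri-docker.service docker.service
		sudo systemctl is-active --quiet docker || echo "docker inactive"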
	I1024 19:38:52.363814   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:38:52.381649   33086 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1024 19:38:52.381684   33086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:38:52.381733   33086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:38:52.390984   33086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:38:52.391040   33086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:38:52.400659   33086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:38:52.409576   33086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:38:52.419273   33086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:38:52.428445   33086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:38:52.436309   33086 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1024 19:38:52.436358   33086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:38:52.444179   33086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:38:52.586133   33086 ssh_runner.go:195] Run: sudo systemctl restart crio
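	Taken together, the crictl.yaml write and the sed edits above should leave the runtime configuration in roughly this state (reconstructed from the commands, not read back from the guest):

		# /etc/crictl.yaml
		runtime-endpoint: unix:///var/run/crio/crio.sock

		# /etc/crio/crio.conf.d/02-crio.conf (keys touched here)
		pause_image = "registry.k8s.io/pause:3.9"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"

	The crio restart then picks these up, and the stat/crictl checks that follow confirm the socket and runtime version.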
	I1024 19:38:52.811503   33086 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:38:52.811567   33086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:38:52.817695   33086 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1024 19:38:52.817717   33086 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1024 19:38:52.817727   33086 command_runner.go:130] > Device: 16h/22d	Inode: 1176        Links: 1
	I1024 19:38:52.817737   33086 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:38:52.817746   33086 command_runner.go:130] > Access: 2023-10-24 19:38:52.742831742 +0000
	I1024 19:38:52.817762   33086 command_runner.go:130] > Modify: 2023-10-24 19:38:52.742831742 +0000
	I1024 19:38:52.817772   33086 command_runner.go:130] > Change: 2023-10-24 19:38:52.742831742 +0000
	I1024 19:38:52.817782   33086 command_runner.go:130] >  Birth: -
	I1024 19:38:52.818000   33086 start.go:540] Will wait 60s for crictl version
	I1024 19:38:52.818053   33086 ssh_runner.go:195] Run: which crictl
	I1024 19:38:52.821708   33086 command_runner.go:130] > /usr/bin/crictl
	I1024 19:38:52.821767   33086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:38:52.858248   33086 command_runner.go:130] > Version:  0.1.0
	I1024 19:38:52.858265   33086 command_runner.go:130] > RuntimeName:  cri-o
	I1024 19:38:52.858272   33086 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1024 19:38:52.858280   33086 command_runner.go:130] > RuntimeApiVersion:  v1
	I1024 19:38:52.858309   33086 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 19:38:52.858365   33086 ssh_runner.go:195] Run: crio --version
	I1024 19:38:52.905621   33086 command_runner.go:130] > crio version 1.24.1
	I1024 19:38:52.905641   33086 command_runner.go:130] > Version:          1.24.1
	I1024 19:38:52.905651   33086 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1024 19:38:52.905658   33086 command_runner.go:130] > GitTreeState:     dirty
	I1024 19:38:52.905666   33086 command_runner.go:130] > BuildDate:        2023-10-16T21:18:20Z
	I1024 19:38:52.905672   33086 command_runner.go:130] > GoVersion:        go1.19.9
	I1024 19:38:52.905679   33086 command_runner.go:130] > Compiler:         gc
	I1024 19:38:52.905686   33086 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:38:52.905694   33086 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:38:52.905707   33086 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:38:52.905719   33086 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:38:52.905728   33086 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:38:52.907377   33086 ssh_runner.go:195] Run: crio --version
	I1024 19:38:52.949695   33086 command_runner.go:130] > crio version 1.24.1
	I1024 19:38:52.949721   33086 command_runner.go:130] > Version:          1.24.1
	I1024 19:38:52.949732   33086 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1024 19:38:52.949740   33086 command_runner.go:130] > GitTreeState:     dirty
	I1024 19:38:52.949749   33086 command_runner.go:130] > BuildDate:        2023-10-16T21:18:20Z
	I1024 19:38:52.949756   33086 command_runner.go:130] > GoVersion:        go1.19.9
	I1024 19:38:52.949763   33086 command_runner.go:130] > Compiler:         gc
	I1024 19:38:52.949771   33086 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:38:52.949779   33086 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:38:52.949793   33086 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:38:52.949803   33086 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:38:52.949811   33086 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:38:52.954125   33086 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 19:38:52.955538   33086 out.go:177]   - env NO_PROXY=192.168.39.247
	I1024 19:38:52.956957   33086 out.go:177]   - env NO_PROXY=192.168.39.247,192.168.39.186
	I1024 19:38:52.958736   33086 main.go:141] libmachine: (multinode-632589-m03) Calling .GetIP
	I1024 19:38:52.961380   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:38:52.961721   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:9f:44", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:27:17 +0000 UTC Type:0 Mac:52:54:00:e8:9f:44 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-632589-m03 Clientid:01:52:54:00:e8:9f:44}
	I1024 19:38:52.961779   33086 main.go:141] libmachine: (multinode-632589-m03) DBG | domain multinode-632589-m03 has defined IP address 192.168.39.13 and MAC address 52:54:00:e8:9f:44 in network mk-multinode-632589
	I1024 19:38:52.961917   33086 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 19:38:52.966161   33086 command_runner.go:130] > 192.168.39.1	host.minikube.internal
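The grep at 19:38:52.961917 is a guard: the host.minikube.internal entry only needs to be appended to /etc/hosts when the gateway mapping is not already there (here it is, so nothing more happens). A hedged sketch of that check-then-append idiom; the run helper that executes a command on the node is an assumption for illustration, not minikube's API:

    package bootstrap

    import "fmt"

    // ensureHostsEntry mirrors the guard logged above: if "ip name" is not
    // already in /etc/hosts, append it via sudo tee.
    func ensureHostsEntry(run func(cmd string) (string, error), ip, name string) error {
        entry := ip + " " + name
        if _, err := run(fmt.Sprintf("grep -q '%s' /etc/hosts", entry)); err == nil {
            return nil // entry already present, as in the log above
        }
        _, err := run(fmt.Sprintf("echo '%s' | sudo tee -a /etc/hosts >/dev/null", entry))
        return err
    }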
	I1024 19:38:52.966206   33086 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589 for IP: 192.168.39.13
	I1024 19:38:52.966221   33086 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:38:52.966345   33086 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 19:38:52.966379   33086 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 19:38:52.966390   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 19:38:52.966404   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 19:38:52.966416   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 19:38:52.966428   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 19:38:52.966472   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 19:38:52.966499   33086 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 19:38:52.966508   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 19:38:52.966535   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 19:38:52.966560   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:38:52.966581   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 19:38:52.966619   33086 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:38:52.966643   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> /usr/share/ca-certificates/162982.pem
	I1024 19:38:52.966658   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:38:52.966674   33086 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem -> /usr/share/ca-certificates/16298.pem
	I1024 19:38:52.967054   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:38:52.990346   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:38:53.013050   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:38:53.035730   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 19:38:53.058361   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 19:38:53.080539   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:38:53.102240   33086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
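The scp lines above push the shared CA material and the extra /usr/share/ca-certificates files onto the node. A rough, hedged sketch of one way to land a small file on the node over the same SSH channel; minikube's ssh_runner does its own scp-style transfer, so this base64/tee pipe is only an illustration using the assumed run helper from the previous sketch:

    package bootstrap

    import (
        "encoding/base64"
        "fmt"
    )

    // pushFile writes data to dest on the node by piping it through base64
    // and sudo tee. Adequate for small certificate files like the ones
    // copied above; a real transfer would stream the bytes instead.
    func pushFile(run func(cmd string) (string, error), data []byte, dest string) error {
        encoded := base64.StdEncoding.EncodeToString(data)
        cmd := fmt.Sprintf("echo %s | base64 -d | sudo tee %s >/dev/null", encoded, dest)
        _, err := run(cmd)
        return err
    }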
	I1024 19:38:53.124697   33086 ssh_runner.go:195] Run: openssl version
	I1024 19:38:53.130121   33086 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1024 19:38:53.130370   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:38:53.140862   33086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:38:53.145394   33086 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:38:53.145608   33086 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:38:53.145660   33086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:38:53.150925   33086 command_runner.go:130] > b5213941
	I1024 19:38:53.151303   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:38:53.160795   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 19:38:53.171175   33086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 19:38:53.175800   33086 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 19:38:53.175831   33086 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 19:38:53.175874   33086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 19:38:53.181951   33086 command_runner.go:130] > 51391683
	I1024 19:38:53.182397   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 19:38:53.192832   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 19:38:53.203201   33086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 19:38:53.208647   33086 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 19:38:53.208918   33086 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 19:38:53.208990   33086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 19:38:53.214194   33086 command_runner.go:130] > 3ec20f2e
	I1024 19:38:53.214513   33086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
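Each of the three hash/symlink pairs above follows the same pattern: ask openssl for the certificate's subject hash, then link /etc/ssl/certs/<hash>.0 to that certificate so OpenSSL-based clients can look the CA up by hash. A compact sketch of that command pair, using the same assumed run helper as the earlier sketches:

    package bootstrap

    import (
        "fmt"
        "strings"
    )

    // hashAndLink reproduces the command pair logged above: compute the
    // OpenSSL subject hash of certPath, then create the
    // /etc/ssl/certs/<hash>.0 symlink if it does not already exist.
    func hashAndLink(run func(cmd string) (string, error), certPath string) error {
        out, err := run("openssl x509 -hash -noout -in " + certPath)
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(out) // e.g. "b5213941" for minikubeCA.pem above
        link := "/etc/ssl/certs/" + hash + ".0"
        _, err = run(fmt.Sprintf("sudo /bin/bash -c \"test -L %s || ln -fs %s %s\"", link, certPath, link))
        return err
    }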
	I1024 19:38:53.223478   33086 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:38:53.228531   33086 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:38:53.228567   33086 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:38:53.228649   33086 ssh_runner.go:195] Run: crio config
	I1024 19:38:53.285285   33086 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1024 19:38:53.285323   33086 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1024 19:38:53.285334   33086 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1024 19:38:53.285340   33086 command_runner.go:130] > #
	I1024 19:38:53.285347   33086 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1024 19:38:53.285354   33086 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1024 19:38:53.285360   33086 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1024 19:38:53.285372   33086 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1024 19:38:53.285377   33086 command_runner.go:130] > # reload'.
	I1024 19:38:53.285395   33086 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1024 19:38:53.285415   33086 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1024 19:38:53.285426   33086 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1024 19:38:53.285437   33086 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1024 19:38:53.285445   33086 command_runner.go:130] > [crio]
	I1024 19:38:53.285460   33086 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1024 19:38:53.285469   33086 command_runner.go:130] > # containers images, in this directory.
	I1024 19:38:53.285476   33086 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1024 19:38:53.285490   33086 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1024 19:38:53.285497   33086 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1024 19:38:53.285509   33086 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1024 19:38:53.285517   33086 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1024 19:38:53.285525   33086 command_runner.go:130] > storage_driver = "overlay"
	I1024 19:38:53.285534   33086 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1024 19:38:53.285547   33086 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1024 19:38:53.285555   33086 command_runner.go:130] > storage_option = [
	I1024 19:38:53.285561   33086 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1024 19:38:53.285567   33086 command_runner.go:130] > ]
	I1024 19:38:53.285582   33086 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1024 19:38:53.285595   33086 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1024 19:38:53.285606   33086 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1024 19:38:53.285615   33086 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1024 19:38:53.285625   33086 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1024 19:38:53.285635   33086 command_runner.go:130] > # always happen on a node reboot
	I1024 19:38:53.285643   33086 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1024 19:38:53.285653   33086 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1024 19:38:53.285661   33086 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1024 19:38:53.285682   33086 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1024 19:38:53.285694   33086 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1024 19:38:53.285709   33086 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1024 19:38:53.285722   33086 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1024 19:38:53.285728   33086 command_runner.go:130] > # internal_wipe = true
	I1024 19:38:53.285738   33086 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1024 19:38:53.285749   33086 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1024 19:38:53.285761   33086 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1024 19:38:53.285770   33086 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1024 19:38:53.285789   33086 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1024 19:38:53.285797   33086 command_runner.go:130] > [crio.api]
	I1024 19:38:53.285806   33086 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1024 19:38:53.285815   33086 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1024 19:38:53.285823   33086 command_runner.go:130] > # IP address on which the stream server will listen.
	I1024 19:38:53.285833   33086 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1024 19:38:53.285845   33086 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1024 19:38:53.285857   33086 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1024 19:38:53.285867   33086 command_runner.go:130] > # stream_port = "0"
	I1024 19:38:53.285876   33086 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1024 19:38:53.285918   33086 command_runner.go:130] > # stream_enable_tls = false
	I1024 19:38:53.285929   33086 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1024 19:38:53.285935   33086 command_runner.go:130] > # stream_idle_timeout = ""
	I1024 19:38:53.285944   33086 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1024 19:38:53.285952   33086 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1024 19:38:53.285964   33086 command_runner.go:130] > # minutes.
	I1024 19:38:53.285977   33086 command_runner.go:130] > # stream_tls_cert = ""
	I1024 19:38:53.285985   33086 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1024 19:38:53.285998   33086 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1024 19:38:53.286006   33086 command_runner.go:130] > # stream_tls_key = ""
	I1024 19:38:53.286013   33086 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1024 19:38:53.286024   33086 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1024 19:38:53.286032   33086 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1024 19:38:53.286042   33086 command_runner.go:130] > # stream_tls_ca = ""
	I1024 19:38:53.286052   33086 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:38:53.286059   33086 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1024 19:38:53.286067   33086 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:38:53.286077   33086 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1024 19:38:53.286105   33086 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1024 19:38:53.286116   33086 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1024 19:38:53.286122   33086 command_runner.go:130] > [crio.runtime]
	I1024 19:38:53.286134   33086 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1024 19:38:53.286144   33086 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1024 19:38:53.286150   33086 command_runner.go:130] > # "nofile=1024:2048"
	I1024 19:38:53.286158   33086 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1024 19:38:53.286167   33086 command_runner.go:130] > # default_ulimits = [
	I1024 19:38:53.286181   33086 command_runner.go:130] > # ]
	I1024 19:38:53.286194   33086 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1024 19:38:53.286204   33086 command_runner.go:130] > # no_pivot = false
	I1024 19:38:53.286211   33086 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1024 19:38:53.286224   33086 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1024 19:38:53.286234   33086 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1024 19:38:53.286246   33086 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1024 19:38:53.286255   33086 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1024 19:38:53.286267   33086 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:38:53.286276   33086 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1024 19:38:53.286285   33086 command_runner.go:130] > # Cgroup setting for conmon
	I1024 19:38:53.286296   33086 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1024 19:38:53.286305   33086 command_runner.go:130] > conmon_cgroup = "pod"
	I1024 19:38:53.286315   33086 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1024 19:38:53.286326   33086 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1024 19:38:53.286338   33086 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:38:53.286347   33086 command_runner.go:130] > conmon_env = [
	I1024 19:38:53.286356   33086 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1024 19:38:53.286365   33086 command_runner.go:130] > ]
	I1024 19:38:53.286374   33086 command_runner.go:130] > # Additional environment variables to set for all the
	I1024 19:38:53.286386   33086 command_runner.go:130] > # containers. These are overridden if set in the
	I1024 19:38:53.286395   33086 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1024 19:38:53.286403   33086 command_runner.go:130] > # default_env = [
	I1024 19:38:53.286409   33086 command_runner.go:130] > # ]
	I1024 19:38:53.286421   33086 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1024 19:38:53.286428   33086 command_runner.go:130] > # selinux = false
	I1024 19:38:53.286441   33086 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1024 19:38:53.286465   33086 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1024 19:38:53.286478   33086 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1024 19:38:53.286485   33086 command_runner.go:130] > # seccomp_profile = ""
	I1024 19:38:53.286498   33086 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1024 19:38:53.286509   33086 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1024 19:38:53.286522   33086 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1024 19:38:53.286530   33086 command_runner.go:130] > # which might increase security.
	I1024 19:38:53.286574   33086 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1024 19:38:53.286588   33086 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1024 19:38:53.286604   33086 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1024 19:38:53.286613   33086 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1024 19:38:53.286623   33086 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1024 19:38:53.286635   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:38:53.286645   33086 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1024 19:38:53.286654   33086 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1024 19:38:53.286664   33086 command_runner.go:130] > # the cgroup blockio controller.
	I1024 19:38:53.286670   33086 command_runner.go:130] > # blockio_config_file = ""
	I1024 19:38:53.286680   33086 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1024 19:38:53.286686   33086 command_runner.go:130] > # irqbalance daemon.
	I1024 19:38:53.286694   33086 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1024 19:38:53.286708   33086 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1024 19:38:53.286716   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:38:53.286726   33086 command_runner.go:130] > # rdt_config_file = ""
	I1024 19:38:53.286735   33086 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1024 19:38:53.286745   33086 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1024 19:38:53.286756   33086 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1024 19:38:53.286763   33086 command_runner.go:130] > # separate_pull_cgroup = ""
	I1024 19:38:53.286775   33086 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1024 19:38:53.286789   33086 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1024 19:38:53.286795   33086 command_runner.go:130] > # will be added.
	I1024 19:38:53.286801   33086 command_runner.go:130] > # default_capabilities = [
	I1024 19:38:53.286807   33086 command_runner.go:130] > # 	"CHOWN",
	I1024 19:38:53.286814   33086 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1024 19:38:53.286823   33086 command_runner.go:130] > # 	"FSETID",
	I1024 19:38:53.286829   33086 command_runner.go:130] > # 	"FOWNER",
	I1024 19:38:53.286839   33086 command_runner.go:130] > # 	"SETGID",
	I1024 19:38:53.286846   33086 command_runner.go:130] > # 	"SETUID",
	I1024 19:38:53.286852   33086 command_runner.go:130] > # 	"SETPCAP",
	I1024 19:38:53.286859   33086 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1024 19:38:53.286873   33086 command_runner.go:130] > # 	"KILL",
	I1024 19:38:53.286879   33086 command_runner.go:130] > # ]
	I1024 19:38:53.286889   33086 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1024 19:38:53.286898   33086 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:38:53.286906   33086 command_runner.go:130] > # default_sysctls = [
	I1024 19:38:53.286912   33086 command_runner.go:130] > # ]
	I1024 19:38:53.286926   33086 command_runner.go:130] > # List of devices on the host that a
	I1024 19:38:53.286938   33086 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1024 19:38:53.286944   33086 command_runner.go:130] > # allowed_devices = [
	I1024 19:38:53.286951   33086 command_runner.go:130] > # 	"/dev/fuse",
	I1024 19:38:53.286959   33086 command_runner.go:130] > # ]
	I1024 19:38:53.286971   33086 command_runner.go:130] > # List of additional devices. specified as
	I1024 19:38:53.286985   33086 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1024 19:38:53.286994   33086 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1024 19:38:53.287029   33086 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:38:53.287039   33086 command_runner.go:130] > # additional_devices = [
	I1024 19:38:53.287044   33086 command_runner.go:130] > # ]
	I1024 19:38:53.287051   33086 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1024 19:38:53.287057   33086 command_runner.go:130] > # cdi_spec_dirs = [
	I1024 19:38:53.287071   33086 command_runner.go:130] > # 	"/etc/cdi",
	I1024 19:38:53.287077   33086 command_runner.go:130] > # 	"/var/run/cdi",
	I1024 19:38:53.287089   33086 command_runner.go:130] > # ]
	I1024 19:38:53.287099   33086 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1024 19:38:53.287109   33086 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1024 19:38:53.287122   33086 command_runner.go:130] > # Defaults to false.
	I1024 19:38:53.287134   33086 command_runner.go:130] > # device_ownership_from_security_context = false
	I1024 19:38:53.287145   33086 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1024 19:38:53.287156   33086 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1024 19:38:53.287166   33086 command_runner.go:130] > # hooks_dir = [
	I1024 19:38:53.287173   33086 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1024 19:38:53.287179   33086 command_runner.go:130] > # ]
	I1024 19:38:53.287193   33086 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1024 19:38:53.287204   33086 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1024 19:38:53.287216   33086 command_runner.go:130] > # its default mounts from the following two files:
	I1024 19:38:53.287232   33086 command_runner.go:130] > #
	I1024 19:38:53.287242   33086 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1024 19:38:53.287256   33086 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1024 19:38:53.287267   33086 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1024 19:38:53.287273   33086 command_runner.go:130] > #
	I1024 19:38:53.287285   33086 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1024 19:38:53.287295   33086 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1024 19:38:53.287308   33086 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1024 19:38:53.287320   33086 command_runner.go:130] > #      only add mounts it finds in this file.
	I1024 19:38:53.287328   33086 command_runner.go:130] > #
	I1024 19:38:53.287334   33086 command_runner.go:130] > # default_mounts_file = ""
	I1024 19:38:53.287348   33086 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1024 19:38:53.287359   33086 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1024 19:38:53.287371   33086 command_runner.go:130] > pids_limit = 1024
	I1024 19:38:53.287383   33086 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1024 19:38:53.287396   33086 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1024 19:38:53.287408   33086 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1024 19:38:53.287422   33086 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1024 19:38:53.287431   33086 command_runner.go:130] > # log_size_max = -1
	I1024 19:38:53.287445   33086 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1024 19:38:53.287460   33086 command_runner.go:130] > # log_to_journald = false
	I1024 19:38:53.287471   33086 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1024 19:38:53.287483   33086 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1024 19:38:53.287529   33086 command_runner.go:130] > # Path to directory for container attach sockets.
	I1024 19:38:53.287542   33086 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1024 19:38:53.287550   33086 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1024 19:38:53.287557   33086 command_runner.go:130] > # bind_mount_prefix = ""
	I1024 19:38:53.287565   33086 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1024 19:38:53.287575   33086 command_runner.go:130] > # read_only = false
	I1024 19:38:53.287585   33086 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1024 19:38:53.287597   33086 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1024 19:38:53.287607   33086 command_runner.go:130] > # live configuration reload.
	I1024 19:38:53.287615   33086 command_runner.go:130] > # log_level = "info"
	I1024 19:38:53.287625   33086 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1024 19:38:53.287638   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:38:53.287648   33086 command_runner.go:130] > # log_filter = ""
	I1024 19:38:53.287661   33086 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1024 19:38:53.287689   33086 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1024 19:38:53.287696   33086 command_runner.go:130] > # separated by comma.
	I1024 19:38:53.287703   33086 command_runner.go:130] > # uid_mappings = ""
	I1024 19:38:53.287713   33086 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1024 19:38:53.287723   33086 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1024 19:38:53.287730   33086 command_runner.go:130] > # separated by comma.
	I1024 19:38:53.287737   33086 command_runner.go:130] > # gid_mappings = ""
	I1024 19:38:53.287748   33086 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1024 19:38:53.287758   33086 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:38:53.287768   33086 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:38:53.287775   33086 command_runner.go:130] > # minimum_mappable_uid = -1
	I1024 19:38:53.287784   33086 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1024 19:38:53.287794   33086 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:38:53.287807   33086 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:38:53.287814   33086 command_runner.go:130] > # minimum_mappable_gid = -1
	I1024 19:38:53.287826   33086 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1024 19:38:53.287835   33086 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1024 19:38:53.287847   33086 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1024 19:38:53.287854   33086 command_runner.go:130] > # ctr_stop_timeout = 30
	I1024 19:38:53.287864   33086 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1024 19:38:53.287875   33086 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1024 19:38:53.287886   33086 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1024 19:38:53.287894   33086 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1024 19:38:53.287905   33086 command_runner.go:130] > drop_infra_ctr = false
	I1024 19:38:53.287915   33086 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1024 19:38:53.287927   33086 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1024 19:38:53.287938   33086 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1024 19:38:53.287948   33086 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1024 19:38:53.287957   33086 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1024 19:38:53.287969   33086 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1024 19:38:53.287978   33086 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1024 19:38:53.287989   33086 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1024 19:38:53.287995   33086 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1024 19:38:53.288005   33086 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1024 19:38:53.288014   33086 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1024 19:38:53.288025   33086 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1024 19:38:53.288033   33086 command_runner.go:130] > # default_runtime = "runc"
	I1024 19:38:53.288042   33086 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1024 19:38:53.288055   33086 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1024 19:38:53.288073   33086 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1024 19:38:53.288085   33086 command_runner.go:130] > # creation as a file is not desired either.
	I1024 19:38:53.288100   33086 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1024 19:38:53.288110   33086 command_runner.go:130] > # the hostname is being managed dynamically.
	I1024 19:38:53.288123   33086 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1024 19:38:53.288129   33086 command_runner.go:130] > # ]
	I1024 19:38:53.288138   33086 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1024 19:38:53.288147   33086 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1024 19:38:53.288163   33086 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1024 19:38:53.288176   33086 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1024 19:38:53.288184   33086 command_runner.go:130] > #
	I1024 19:38:53.288191   33086 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1024 19:38:53.288201   33086 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1024 19:38:53.288210   33086 command_runner.go:130] > #  runtime_type = "oci"
	I1024 19:38:53.288221   33086 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1024 19:38:53.288229   33086 command_runner.go:130] > #  privileged_without_host_devices = false
	I1024 19:38:53.288236   33086 command_runner.go:130] > #  allowed_annotations = []
	I1024 19:38:53.288245   33086 command_runner.go:130] > # Where:
	I1024 19:38:53.288253   33086 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1024 19:38:53.288267   33086 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1024 19:38:53.288280   33086 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1024 19:38:53.288327   33086 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1024 19:38:53.288340   33086 command_runner.go:130] > #   in $PATH.
	I1024 19:38:53.288352   33086 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1024 19:38:53.288360   33086 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1024 19:38:53.288370   33086 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1024 19:38:53.288377   33086 command_runner.go:130] > #   state.
	I1024 19:38:53.288387   33086 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1024 19:38:53.288399   33086 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1024 19:38:53.288413   33086 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1024 19:38:53.288427   33086 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1024 19:38:53.288441   33086 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1024 19:38:53.288460   33086 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1024 19:38:53.288473   33086 command_runner.go:130] > #   The currently recognized values are:
	I1024 19:38:53.288482   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1024 19:38:53.288495   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1024 19:38:53.288507   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1024 19:38:53.288519   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1024 19:38:53.288532   33086 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1024 19:38:53.288545   33086 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1024 19:38:53.288560   33086 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1024 19:38:53.288572   33086 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1024 19:38:53.288583   33086 command_runner.go:130] > #   should be moved to the container's cgroup
	I1024 19:38:53.288592   33086 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1024 19:38:53.288602   33086 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1024 19:38:53.288610   33086 command_runner.go:130] > runtime_type = "oci"
	I1024 19:38:53.288620   33086 command_runner.go:130] > runtime_root = "/run/runc"
	I1024 19:38:53.288630   33086 command_runner.go:130] > runtime_config_path = ""
	I1024 19:38:53.288637   33086 command_runner.go:130] > monitor_path = ""
	I1024 19:38:53.288647   33086 command_runner.go:130] > monitor_cgroup = ""
	I1024 19:38:53.288658   33086 command_runner.go:130] > monitor_exec_cgroup = ""
	I1024 19:38:53.288670   33086 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1024 19:38:53.288680   33086 command_runner.go:130] > # running containers
	I1024 19:38:53.288690   33086 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1024 19:38:53.288698   33086 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1024 19:38:53.288725   33086 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1024 19:38:53.288733   33086 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1024 19:38:53.288739   33086 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1024 19:38:53.288746   33086 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1024 19:38:53.288751   33086 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1024 19:38:53.288759   33086 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1024 19:38:53.288767   33086 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1024 19:38:53.288776   33086 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1024 19:38:53.288790   33086 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1024 19:38:53.288802   33086 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1024 19:38:53.288817   33086 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1024 19:38:53.288833   33086 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1024 19:38:53.288848   33086 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1024 19:38:53.288861   33086 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1024 19:38:53.288876   33086 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1024 19:38:53.288891   33086 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1024 19:38:53.288906   33086 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1024 19:38:53.288918   33086 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1024 19:38:53.288929   33086 command_runner.go:130] > # Example:
	I1024 19:38:53.288937   33086 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1024 19:38:53.288945   33086 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1024 19:38:53.288965   33086 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1024 19:38:53.288976   33086 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1024 19:38:53.288982   33086 command_runner.go:130] > # cpuset = 0
	I1024 19:38:53.288991   33086 command_runner.go:130] > # cpushares = "0-1"
	I1024 19:38:53.288997   33086 command_runner.go:130] > # Where:
	I1024 19:38:53.289009   33086 command_runner.go:130] > # The workload name is workload-type.
	I1024 19:38:53.289021   33086 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1024 19:38:53.289034   33086 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1024 19:38:53.289047   33086 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1024 19:38:53.289063   33086 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1024 19:38:53.289075   33086 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1024 19:38:53.289083   33086 command_runner.go:130] > # 
	I1024 19:38:53.289090   33086 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1024 19:38:53.289099   33086 command_runner.go:130] > #
	I1024 19:38:53.289143   33086 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1024 19:38:53.289157   33086 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1024 19:38:53.289170   33086 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1024 19:38:53.289179   33086 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1024 19:38:53.289192   33086 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1024 19:38:53.289201   33086 command_runner.go:130] > [crio.image]
	I1024 19:38:53.289212   33086 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1024 19:38:53.289224   33086 command_runner.go:130] > # default_transport = "docker://"
	I1024 19:38:53.289236   33086 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1024 19:38:53.289250   33086 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:38:53.289260   33086 command_runner.go:130] > # global_auth_file = ""
	I1024 19:38:53.289272   33086 command_runner.go:130] > # The image used to instantiate infra containers.
	I1024 19:38:53.289279   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:38:53.289287   33086 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1024 19:38:53.289314   33086 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1024 19:38:53.289329   33086 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:38:53.289341   33086 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:38:53.289352   33086 command_runner.go:130] > # pause_image_auth_file = ""
	I1024 19:38:53.289364   33086 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1024 19:38:53.289377   33086 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1024 19:38:53.289386   33086 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1024 19:38:53.289399   33086 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1024 19:38:53.289411   33086 command_runner.go:130] > # pause_command = "/pause"
	I1024 19:38:53.289425   33086 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1024 19:38:53.289439   33086 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1024 19:38:53.289457   33086 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1024 19:38:53.289470   33086 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1024 19:38:53.289479   33086 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1024 19:38:53.289486   33086 command_runner.go:130] > # signature_policy = ""
	I1024 19:38:53.289499   33086 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1024 19:38:53.289513   33086 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1024 19:38:53.289525   33086 command_runner.go:130] > # changing them here.
	I1024 19:38:53.289536   33086 command_runner.go:130] > # insecure_registries = [
	I1024 19:38:53.289545   33086 command_runner.go:130] > # ]
	I1024 19:38:53.289559   33086 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1024 19:38:53.289570   33086 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1024 19:38:53.289578   33086 command_runner.go:130] > # image_volumes = "mkdir"
	I1024 19:38:53.289587   33086 command_runner.go:130] > # Temporary directory to use for storing big files
	I1024 19:38:53.289598   33086 command_runner.go:130] > # big_files_temporary_dir = ""
	I1024 19:38:53.289613   33086 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1024 19:38:53.289624   33086 command_runner.go:130] > # CNI plugins.
	I1024 19:38:53.289633   33086 command_runner.go:130] > [crio.network]
	I1024 19:38:53.289646   33086 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1024 19:38:53.289659   33086 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1024 19:38:53.289670   33086 command_runner.go:130] > # cni_default_network = ""
	I1024 19:38:53.289679   33086 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1024 19:38:53.289689   33086 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1024 19:38:53.289702   33086 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1024 19:38:53.289713   33086 command_runner.go:130] > # plugin_dirs = [
	I1024 19:38:53.289720   33086 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1024 19:38:53.289730   33086 command_runner.go:130] > # ]
	I1024 19:38:53.289758   33086 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1024 19:38:53.289768   33086 command_runner.go:130] > [crio.metrics]
	I1024 19:38:53.289778   33086 command_runner.go:130] > # Globally enable or disable metrics support.
	I1024 19:38:53.289785   33086 command_runner.go:130] > enable_metrics = true
	I1024 19:38:53.289793   33086 command_runner.go:130] > # Specify enabled metrics collectors.
	I1024 19:38:53.289804   33086 command_runner.go:130] > # Per default all metrics are enabled.
	I1024 19:38:53.289818   33086 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1024 19:38:53.289833   33086 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1024 19:38:53.289846   33086 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1024 19:38:53.289856   33086 command_runner.go:130] > # metrics_collectors = [
	I1024 19:38:53.289866   33086 command_runner.go:130] > # 	"operations",
	I1024 19:38:53.289877   33086 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1024 19:38:53.289886   33086 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1024 19:38:53.289891   33086 command_runner.go:130] > # 	"operations_errors",
	I1024 19:38:53.289898   33086 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1024 19:38:53.289909   33086 command_runner.go:130] > # 	"image_pulls_by_name",
	I1024 19:38:53.289917   33086 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1024 19:38:53.289928   33086 command_runner.go:130] > # 	"image_pulls_failures",
	I1024 19:38:53.289938   33086 command_runner.go:130] > # 	"image_pulls_successes",
	I1024 19:38:53.289946   33086 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1024 19:38:53.289957   33086 command_runner.go:130] > # 	"image_layer_reuse",
	I1024 19:38:53.289966   33086 command_runner.go:130] > # 	"containers_oom_total",
	I1024 19:38:53.289975   33086 command_runner.go:130] > # 	"containers_oom",
	I1024 19:38:53.289980   33086 command_runner.go:130] > # 	"processes_defunct",
	I1024 19:38:53.289984   33086 command_runner.go:130] > # 	"operations_total",
	I1024 19:38:53.289994   33086 command_runner.go:130] > # 	"operations_latency_seconds",
	I1024 19:38:53.290005   33086 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1024 19:38:53.290016   33086 command_runner.go:130] > # 	"operations_errors_total",
	I1024 19:38:53.290027   33086 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1024 19:38:53.290037   33086 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1024 19:38:53.290048   33086 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1024 19:38:53.290059   33086 command_runner.go:130] > # 	"image_pulls_success_total",
	I1024 19:38:53.290067   33086 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1024 19:38:53.290071   33086 command_runner.go:130] > # 	"containers_oom_count_total",
	I1024 19:38:53.290081   33086 command_runner.go:130] > # ]
	I1024 19:38:53.290091   33086 command_runner.go:130] > # The port on which the metrics server will listen.
	I1024 19:38:53.290101   33086 command_runner.go:130] > # metrics_port = 9090
	I1024 19:38:53.290110   33086 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1024 19:38:53.290121   33086 command_runner.go:130] > # metrics_socket = ""
	I1024 19:38:53.290130   33086 command_runner.go:130] > # The certificate for the secure metrics server.
	I1024 19:38:53.290161   33086 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1024 19:38:53.290176   33086 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1024 19:38:53.290187   33086 command_runner.go:130] > # certificate on any modification event.
	I1024 19:38:53.290199   33086 command_runner.go:130] > # metrics_cert = ""
	I1024 19:38:53.290209   33086 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1024 19:38:53.290221   33086 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1024 19:38:53.290231   33086 command_runner.go:130] > # metrics_key = ""
	I1024 19:38:53.290246   33086 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1024 19:38:53.290255   33086 command_runner.go:130] > [crio.tracing]
	I1024 19:38:53.290264   33086 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1024 19:38:53.290271   33086 command_runner.go:130] > # enable_tracing = false
	I1024 19:38:53.290280   33086 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1024 19:38:53.290292   33086 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1024 19:38:53.290301   33086 command_runner.go:130] > # Number of samples to collect per million spans.
	I1024 19:38:53.290312   33086 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1024 19:38:53.290323   33086 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1024 19:38:53.290333   33086 command_runner.go:130] > [crio.stats]
	I1024 19:38:53.290341   33086 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1024 19:38:53.290354   33086 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1024 19:38:53.290364   33086 command_runner.go:130] > # stats_collection_period = 0
	I1024 19:38:53.290411   33086 command_runner.go:130] ! time="2023-10-24 19:38:53.274094349Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1024 19:38:53.290458   33086 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
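The crio.conf fragment above leaves metrics_port commented out at its default of 9090; when enable_metrics is turned on, CRI-O serves the listed counters in Prometheus text format on that port. A minimal Go sketch of scraping that endpoint follows; the port, the enabled-metrics assumption, and the "crio_"-prefixed exported names are assumptions, not taken from this run.

// metricscheck.go: fetch CRI-O's Prometheus metrics endpoint and print
// image-pull related series. Assumes enable_metrics = true in crio.conf
// and the default metrics_port of 9090 shown (commented out) above.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()

	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		// Exported names are typically prefixed, e.g. crio_image_pulls_...
		if strings.Contains(line, "image_pulls") && !strings.HasPrefix(line, "#") {
			fmt.Println(line)
		}
	}
}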
	I1024 19:38:53.290530   33086 cni.go:84] Creating CNI manager for ""
	I1024 19:38:53.290541   33086 cni.go:136] 3 nodes found, recommending kindnet
	I1024 19:38:53.290553   33086 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:38:53.290581   33086 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.13 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-632589 NodeName:multinode-632589-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:38:53.290714   33086 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-632589-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 19:38:53.290783   33086 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-632589-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
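The kubeadm config printed above is generated by filling a Go text template with the per-node values from the kubeadm options struct (advertise address, node name, CRI socket, pod subnet, and so on). A minimal sketch of that pattern follows; the template text and field names are a simplified, hypothetical stand-in for minikube's own templates, covering only the InitConfiguration section.

// rendercfg.go: render a kubeadm InitConfiguration from per-node values,
// mirroring the template-fill step that produced the config above.
package main

import (
	"os"
	"text/template"
)

type nodeParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
	NodeIP           string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	p := nodeParams{
		AdvertiseAddress: "192.168.39.13",
		BindPort:         8443,
		NodeName:         "multinode-632589-m03",
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeIP:           "192.168.39.13",
	}
	tmpl := template.Must(template.New("init").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}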
	I1024 19:38:53.290845   33086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:38:53.300044   33086 command_runner.go:130] > kubeadm
	I1024 19:38:53.300059   33086 command_runner.go:130] > kubectl
	I1024 19:38:53.300063   33086 command_runner.go:130] > kubelet
	I1024 19:38:53.300110   33086 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:38:53.300174   33086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1024 19:38:53.309045   33086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1024 19:38:53.326035   33086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:38:53.341739   33086 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I1024 19:38:53.345422   33086 command_runner.go:130] > 192.168.39.247	control-plane.minikube.internal
	I1024 19:38:53.345556   33086 host.go:66] Checking if "multinode-632589" exists ...
	I1024 19:38:53.345774   33086 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:38:53.345919   33086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:38:53.345967   33086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:38:53.360731   33086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
	I1024 19:38:53.361101   33086 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:38:53.361618   33086 main.go:141] libmachine: Using API Version  1
	I1024 19:38:53.361637   33086 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:38:53.361950   33086 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:38:53.362159   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:38:53.362294   33086 start.go:304] JoinCluster: &{Name:multinode-632589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.3 ClusterName:multinode-632589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:38:53.362420   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1024 19:38:53.362433   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:38:53.365445   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:38:53.365849   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:38:53.365882   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:38:53.365991   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:38:53.366163   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:38:53.366300   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:38:53.366434   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:38:53.546799   33086 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token cg7a1w.x7luzpe9l9prf5ou --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
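The join command above is the output of the "kubeadm token create --print-join-command --ttl=0" invocation a few lines earlier, run over SSH on the control plane. A minimal sketch of issuing the same command locally with os/exec; it assumes kubeadm is on PATH and that the environment's kubeconfig points at the cluster's admin config.

// jointoken.go: ask kubeadm for a fresh join command, as the ssh_runner
// call above does over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").CombinedOutput()
	if err != nil {
		fmt.Printf("kubeadm failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("join command: %s", out)
}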
	I1024 19:38:53.546849   33086 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1024 19:38:53.546914   33086 host.go:66] Checking if "multinode-632589" exists ...
	I1024 19:38:53.547216   33086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:38:53.547260   33086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:38:53.561428   33086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46627
	I1024 19:38:53.561858   33086 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:38:53.562302   33086 main.go:141] libmachine: Using API Version  1
	I1024 19:38:53.562325   33086 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:38:53.562604   33086 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:38:53.562854   33086 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:38:53.563084   33086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-632589-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1024 19:38:53.563107   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:38:53.565755   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:38:53.566173   33086 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:38:53.566207   33086 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:38:53.566354   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:38:53.566569   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:38:53.566729   33086 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:38:53.566865   33086 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:38:53.722953   33086 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1024 19:38:53.790363   33086 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-pwmd9, kube-system/kube-proxy-vjr8q
	I1024 19:38:56.819985   33086 command_runner.go:130] > node/multinode-632589-m03 cordoned
	I1024 19:38:56.820019   33086 command_runner.go:130] > pod "busybox-5bc68d56bd-8pw8v" has DeletionTimestamp older than 1 seconds, skipping
	I1024 19:38:56.820030   33086 command_runner.go:130] > node/multinode-632589-m03 drained
	I1024 19:38:56.820058   33086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-632589-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.25694938s)
	I1024 19:38:56.820081   33086 node.go:108] successfully drained node "m03"
	I1024 19:38:56.820585   33086 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:38:56.820904   33086 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:38:56.821335   33086 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1024 19:38:56.821406   33086 round_trippers.go:463] DELETE https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m03
	I1024 19:38:56.821420   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:56.821432   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:56.821443   33086 round_trippers.go:473]     Content-Type: application/json
	I1024 19:38:56.821454   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:56.833944   33086 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1024 19:38:56.833969   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:56.833978   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:56 GMT
	I1024 19:38:56.833989   33086 round_trippers.go:580]     Audit-Id: f8f7cf21-339e-4e9f-927f-2fc2fdc925df
	I1024 19:38:56.833996   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:56.834003   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:56.834016   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:56.834028   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:56.834035   33086 round_trippers.go:580]     Content-Length: 171
	I1024 19:38:56.834353   33086 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-632589-m03","kind":"nodes","uid":"b46ce2c5-5d6c-4894-ad88-10111966a53a"}}
	I1024 19:38:56.834412   33086 node.go:124] successfully deleted node "m03"
	I1024 19:38:56.834424   33086 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
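The DELETE of /api/v1/nodes/multinode-632589-m03 above (after the drain) corresponds to a plain Nodes().Delete call in client-go. A minimal sketch under the assumption that a kubeconfig for this cluster is available; the path used here is illustrative and the node name is the one from the log.

// deletenode.go: remove a stale Node object before re-joining it, mirroring
// the DELETE request in the log above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = cs.CoreV1().Nodes().Delete(context.Background(),
		"multinode-632589-m03", metav1.DeleteOptions{})
	if err != nil {
		fmt.Println("delete failed:", err)
		return
	}
	fmt.Println("node deleted")
}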
	I1024 19:38:56.834449   33086 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1024 19:38:56.834478   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cg7a1w.x7luzpe9l9prf5ou --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-632589-m03"
	I1024 19:38:56.922291   33086 command_runner.go:130] > [preflight] Running pre-flight checks
	I1024 19:38:57.088854   33086 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1024 19:38:57.088887   33086 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1024 19:38:57.147367   33086 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:38:57.147392   33086 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:38:57.147756   33086 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1024 19:38:57.296738   33086 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1024 19:38:57.820586   33086 command_runner.go:130] > This node has joined the cluster:
	I1024 19:38:57.820608   33086 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1024 19:38:57.820614   33086 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1024 19:38:57.820623   33086 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1024 19:38:57.824217   33086 command_runner.go:130] ! W1024 19:38:56.914359    2490 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1024 19:38:57.824236   33086 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1024 19:38:57.824243   33086 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1024 19:38:57.824250   33086 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1024 19:38:57.824273   33086 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1024 19:38:58.167594   33086 start.go:306] JoinCluster complete in 4.805293372s
	I1024 19:38:58.167620   33086 cni.go:84] Creating CNI manager for ""
	I1024 19:38:58.167628   33086 cni.go:136] 3 nodes found, recommending kindnet
	I1024 19:38:58.167681   33086 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:38:58.173555   33086 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1024 19:38:58.173574   33086 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1024 19:38:58.173585   33086 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1024 19:38:58.173596   33086 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:38:58.173612   33086 command_runner.go:130] > Access: 2023-10-24 19:34:45.736816710 +0000
	I1024 19:38:58.173629   33086 command_runner.go:130] > Modify: 2023-10-16 21:25:26.000000000 +0000
	I1024 19:38:58.173634   33086 command_runner.go:130] > Change: 2023-10-24 19:34:43.720816710 +0000
	I1024 19:38:58.173638   33086 command_runner.go:130] >  Birth: -
	I1024 19:38:58.173692   33086 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 19:38:58.173705   33086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:38:58.192684   33086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 19:38:58.513731   33086 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1024 19:38:58.526125   33086 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1024 19:38:58.528794   33086 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1024 19:38:58.542646   33086 command_runner.go:130] > daemonset.apps/kindnet configured
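After the kindnet manifest is applied, the DaemonSet should report one ready pod per node (three in this cluster). A minimal client-go sketch that reads the DaemonSet status; the kubeconfig path is illustrative, the namespace and DaemonSet name come from the log.

// kindnetstatus.go: check that the kindnet DaemonSet applied above has a
// ready pod on every node.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ds, err := cs.AppsV1().DaemonSets("kube-system").Get(
		context.Background(), "kindnet", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("kindnet: %d desired, %d ready\n",
		ds.Status.DesiredNumberScheduled, ds.Status.NumberReady)
}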
	I1024 19:38:58.548876   33086 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:38:58.549113   33086 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:38:58.549512   33086 round_trippers.go:463] GET https://192.168.39.247:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 19:38:58.549529   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.549537   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.549548   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.551645   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:38:58.551664   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.551673   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.551683   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.551691   33086 round_trippers.go:580]     Content-Length: 291
	I1024 19:38:58.551705   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.551717   33086 round_trippers.go:580]     Audit-Id: c4b65a6c-113e-4002-976d-78fd7dd0b7cd
	I1024 19:38:58.551728   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.551738   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.551850   33086 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d94f45ae-0601-4f22-bf81-4e1e0b9f4023","resourceVersion":"875","creationTimestamp":"2023-10-24T19:24:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1024 19:38:58.551966   33086 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-632589" context rescaled to 1 replicas
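The GET of .../deployments/coredns/scale above reads the Deployment's scale subresource, and the rescale to 1 replica updates the same subresource. A minimal client-go sketch of both operations, assuming the same illustrative kubeconfig path as in the earlier sketches.

// corednsscale.go: read and set the coredns Deployment's scale subresource,
// as the kapi.go rescale above does.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(context.Background(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("current replicas:", scale.Spec.Replicas)

	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(context.Background(), "coredns",
		scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}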
	I1024 19:38:58.552001   33086 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1024 19:38:58.554831   33086 out.go:177] * Verifying Kubernetes components...
	I1024 19:38:58.556219   33086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:38:58.570376   33086 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:38:58.570592   33086 kapi.go:59] client config for multinode-632589: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/multinode-632589/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:38:58.570784   33086 node_ready.go:35] waiting up to 6m0s for node "multinode-632589-m03" to be "Ready" ...
	I1024 19:38:58.570836   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m03
	I1024 19:38:58.570845   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.570852   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.570858   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.573016   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:38:58.573032   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.573039   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.573044   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.573049   33086 round_trippers.go:580]     Audit-Id: 9f73ead0-05ac-4659-afc2-2140a4b1a8ad
	I1024 19:38:58.573054   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.573059   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.573064   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.573350   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m03","uid":"65b4b7d9-4bb2-4f88-85f7-062de19e58b0","resourceVersion":"1203","creationTimestamp":"2023-10-24T19:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:38:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:38:57Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1024 19:38:58.573669   33086 node_ready.go:49] node "multinode-632589-m03" has status "Ready":"True"
	I1024 19:38:58.573690   33086 node_ready.go:38] duration metric: took 2.890232ms waiting for node "multinode-632589-m03" to be "Ready" ...
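The node_ready wait above amounts to polling the node object until its NodeReady condition is True. A minimal sketch of that check with client-go; the node name and the 6m0s budget mirror the log, the kubeconfig path and poll interval are illustrative.

// nodeready.go: poll a node until its NodeReady condition is True, the same
// check node_ready.go performs above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"multinode-632589-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to be Ready")
}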
	I1024 19:38:58.573700   33086 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:38:58.573763   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I1024 19:38:58.573775   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.573785   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.573794   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.577019   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:38:58.577032   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.577038   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.577046   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.577054   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.577062   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.577078   33086 round_trippers.go:580]     Audit-Id: be161d7c-be91-4d7f-bf16-5b709ba31f40
	I1024 19:38:58.577086   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.578081   33086 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1209"},"items":[{"metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"856","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81920 chars]
	I1024 19:38:58.580346   33086 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:58.580407   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c5l8s
	I1024 19:38:58.580418   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.580426   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.580432   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.583009   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:38:58.583019   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.583025   33086 round_trippers.go:580]     Audit-Id: e9853988-cbf0-4a0d-9b4c-2f9bdbee6545
	I1024 19:38:58.583030   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.583036   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.583040   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.583049   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.583059   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.583164   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-c5l8s","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20aa782d-e6ed-45ad-b625-556d1a8503c0","resourceVersion":"856","creationTimestamp":"2023-10-24T19:25:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5e285962-4e7d-43a5-b804-501ba193af54","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e285962-4e7d-43a5-b804-501ba193af54\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1024 19:38:58.583517   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:38:58.583529   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.583536   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.583542   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.585266   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:38:58.585285   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.585309   33086 round_trippers.go:580]     Audit-Id: 614b73f6-e5c0-447e-b12b-e84d00dd362f
	I1024 19:38:58.585316   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.585324   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.585332   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.585350   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.585359   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.585633   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1024 19:38:58.585938   33086 pod_ready.go:92] pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:38:58.585952   33086 pod_ready.go:81] duration metric: took 5.590331ms waiting for pod "coredns-5dd5756b68-c5l8s" in "kube-system" namespace to be "Ready" ...
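The pod_ready waits that follow repeat the same pattern per system pod: fetch the pod and look for a PodReady condition with status True. A minimal sketch of that single check, parallel to the node sketch above; the pod name is the coredns pod from the log, the kubeconfig path is again illustrative.

// podready.go: report a pod's Ready condition, the test each pod_ready.go
// wait above performs.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-5dd5756b68-c5l8s", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Println("Ready:", c.Status)
		}
	}
}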
	I1024 19:38:58.585960   33086 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:58.586005   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-632589
	I1024 19:38:58.586013   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.586020   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.586028   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.587943   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:38:58.587958   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.587967   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.587976   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.587986   33086 round_trippers.go:580]     Audit-Id: cd5ec8d9-9657-402e-b871-05ace38b2e8a
	I1024 19:38:58.587995   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.588008   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.588024   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.588237   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-632589","namespace":"kube-system","uid":"a84a9833-e3b8-4148-9ee7-3f4479a10186","resourceVersion":"849","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.247:2379","kubernetes.io/config.hash":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.mirror":"07959cd35b2ca084078d0fd5b7cf919c","kubernetes.io/config.seen":"2023-10-24T19:24:56.213299221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1024 19:38:58.588555   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:38:58.588569   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.588578   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.588588   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.590245   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:38:58.590261   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.590271   33086 round_trippers.go:580]     Audit-Id: 68adaff7-104a-4e16-97bd-f1ffe946d24f
	I1024 19:38:58.590280   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.590287   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.590298   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.590307   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.590314   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.590464   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1024 19:38:58.590721   33086 pod_ready.go:92] pod "etcd-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:38:58.590733   33086 pod_ready.go:81] duration metric: took 4.767214ms waiting for pod "etcd-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:58.590746   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:58.590782   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-632589
	I1024 19:38:58.590789   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.590795   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.590802   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.592250   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:38:58.592263   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.592272   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.592280   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.592292   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.592308   33086 round_trippers.go:580]     Audit-Id: 4adb38e5-474f-4ab3-acf2-2877f328d03d
	I1024 19:38:58.592316   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.592328   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.592527   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-632589","namespace":"kube-system","uid":"34fcbf72-bf92-477f-8c1c-b0fd908c561d","resourceVersion":"868","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.247:8443","kubernetes.io/config.hash":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.mirror":"3765446b9543fe4146506d2b0cf0aafd","kubernetes.io/config.seen":"2023-10-24T19:24:56.213304140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1024 19:38:58.592946   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:38:58.592961   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.592968   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.592976   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.594872   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:38:58.594883   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.594888   33086 round_trippers.go:580]     Audit-Id: 9e7ffdba-b511-4149-aa76-65d43d885813
	I1024 19:38:58.594893   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.594898   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.594903   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.594908   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.594916   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.595160   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1024 19:38:58.595389   33086 pod_ready.go:92] pod "kube-apiserver-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:38:58.595399   33086 pod_ready.go:81] duration metric: took 4.648158ms waiting for pod "kube-apiserver-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:58.595406   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:58.595437   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-632589
	I1024 19:38:58.595444   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.595450   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.595457   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.597106   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:38:58.597122   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.597128   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.597134   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.597139   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.597144   33086 round_trippers.go:580]     Audit-Id: 2df8a01a-ae6c-4a06-9d3c-267977197f5c
	I1024 19:38:58.597149   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.597153   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.597394   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-632589","namespace":"kube-system","uid":"6eb03208-9b7f-4b5d-a7cf-03dd9c7948e6","resourceVersion":"850","creationTimestamp":"2023-10-24T19:24:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9a4a5ca64f08e8d78cd58402e3f15810","kubernetes.io/config.mirror":"9a4a5ca64f08e8d78cd58402e3f15810","kubernetes.io/config.seen":"2023-10-24T19:24:47.530352200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1024 19:38:58.597773   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:38:58.597786   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.597795   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.597805   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.599444   33086 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:38:58.599460   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.599469   33086 round_trippers.go:580]     Audit-Id: 07791208-f961-49a1-a9e1-6957b734bdf2
	I1024 19:38:58.599477   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.599492   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.599501   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.599507   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.599519   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.599867   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1024 19:38:58.600151   33086 pod_ready.go:92] pod "kube-controller-manager-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:38:58.600164   33086 pod_ready.go:81] duration metric: took 4.75147ms waiting for pod "kube-controller-manager-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:58.600171   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6vn7s" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:58.771500   33086 request.go:629] Waited for 171.279587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:38:58.771576   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6vn7s
	I1024 19:38:58.771584   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.771595   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.771610   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.774532   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:38:58.774553   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.774561   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.774567   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.774572   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.774578   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.774584   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.774589   33086 round_trippers.go:580]     Audit-Id: ce0671b3-f490-43b2-a252-3c619cbe8107
	I1024 19:38:58.774768   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6vn7s","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6b9189d-1bbe-4de8-a0d8-4ea43b55a45b","resourceVersion":"1030","creationTimestamp":"2023-10-24T19:25:51Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5730 chars]
	I1024 19:38:58.971564   33086 request.go:629] Waited for 196.371697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:38:58.971640   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m02
	I1024 19:38:58.971648   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:58.971659   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:58.971668   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:58.974945   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:38:58.974965   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:58.974972   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:58.974977   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:58.974982   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:58.974987   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:58.974992   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:58 GMT
	I1024 19:38:58.975000   33086 round_trippers.go:580]     Audit-Id: bcf125c2-e0df-4b23-8d2b-232ea32b71c0
	I1024 19:38:58.975247   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m02","uid":"f34f53a3-bdef-415c-99af-e8304feacde1","resourceVersion":"1015","creationTimestamp":"2023-10-24T19:37:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:37:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:37:16Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1024 19:38:58.975475   33086 pod_ready.go:92] pod "kube-proxy-6vn7s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:38:58.975487   33086 pod_ready.go:81] duration metric: took 375.31026ms waiting for pod "kube-proxy-6vn7s" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:58.975496   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gd49s" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:59.171912   33086 request.go:629] Waited for 196.352221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd49s
	I1024 19:38:59.171985   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd49s
	I1024 19:38:59.171991   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:59.171998   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:59.172005   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:59.175008   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:38:59.175029   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:59.175036   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:59.175041   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:59.175047   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:59.175052   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:59 GMT
	I1024 19:38:59.175057   33086 round_trippers.go:580]     Audit-Id: b5f54bb8-49d9-46d7-b30d-43fb8bc048de
	I1024 19:38:59.175062   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:59.175320   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gd49s","generateName":"kube-proxy-","namespace":"kube-system","uid":"a1c573fd-3f4b-4d90-a366-6d859a121185","resourceVersion":"834","creationTimestamp":"2023-10-24T19:25:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1024 19:38:59.370984   33086 request.go:629] Waited for 195.302834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:38:59.371056   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:38:59.371063   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:59.371072   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:59.371080   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:59.374371   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:38:59.374393   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:59.374402   33086 round_trippers.go:580]     Audit-Id: 3d9a381c-90f1-4a44-b920-770dab41adc6
	I1024 19:38:59.374407   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:59.374412   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:59.374417   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:59.374422   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:59.374427   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:59 GMT
	I1024 19:38:59.374556   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1024 19:38:59.374955   33086 pod_ready.go:92] pod "kube-proxy-gd49s" in "kube-system" namespace has status "Ready":"True"
	I1024 19:38:59.374983   33086 pod_ready.go:81] duration metric: took 399.469449ms waiting for pod "kube-proxy-gd49s" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:59.374995   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vjr8q" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:59.571460   33086 request.go:629] Waited for 196.397624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjr8q
	I1024 19:38:59.571520   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjr8q
	I1024 19:38:59.571527   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:59.571551   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:59.571561   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:59.574407   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:38:59.574428   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:59.574438   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:59.574444   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:59.574449   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:59.574458   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:59.574463   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:59 GMT
	I1024 19:38:59.574478   33086 round_trippers.go:580]     Audit-Id: 3457ea72-03af-4ca7-8160-853e37d8717e
	I1024 19:38:59.575015   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vjr8q","generateName":"kube-proxy-","namespace":"kube-system","uid":"844852b2-3dbb-4d52-a752-b39021adfc04","resourceVersion":"1179","creationTimestamp":"2023-10-24T19:26:43Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a862f46-5df7-4d87-a017-9a979400bf2c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:26:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a862f46-5df7-4d87-a017-9a979400bf2c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5726 chars]
	I1024 19:38:59.771791   33086 request.go:629] Waited for 196.353831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m03
	I1024 19:38:59.771865   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589-m03
	I1024 19:38:59.771871   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:59.771883   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:59.771899   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:59.774616   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:38:59.774637   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:59.774647   33086 round_trippers.go:580]     Audit-Id: 9b33ca60-3e45-4ab5-b3b0-efa4f9e6b9d3
	I1024 19:38:59.774659   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:59.774672   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:59.774680   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:59.774689   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:59.774698   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:59 GMT
	I1024 19:38:59.774910   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589-m03","uid":"65b4b7d9-4bb2-4f88-85f7-062de19e58b0","resourceVersion":"1203","creationTimestamp":"2023-10-24T19:38:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:38:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:38:57Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1024 19:38:59.775248   33086 pod_ready.go:92] pod "kube-proxy-vjr8q" in "kube-system" namespace has status "Ready":"True"
	I1024 19:38:59.775266   33086 pod_ready.go:81] duration metric: took 400.258059ms waiting for pod "kube-proxy-vjr8q" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:59.775277   33086 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:38:59.971844   33086 request.go:629] Waited for 196.495923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-632589
	I1024 19:38:59.971922   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-632589
	I1024 19:38:59.971930   33086 round_trippers.go:469] Request Headers:
	I1024 19:38:59.971957   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:38:59.971971   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:38:59.975631   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:38:59.975651   33086 round_trippers.go:577] Response Headers:
	I1024 19:38:59.975659   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:38:59 GMT
	I1024 19:38:59.975668   33086 round_trippers.go:580]     Audit-Id: a8b58f84-434b-4834-9aef-59b3e25440ab
	I1024 19:38:59.975674   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:38:59.975685   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:38:59.975695   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:38:59.975704   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:38:59.976012   33086 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-632589","namespace":"kube-system","uid":"e85a7c19-1a25-42f5-81bd-16ed7070ca3c","resourceVersion":"857","creationTimestamp":"2023-10-24T19:24:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"83154ed970e6208e036ff8de26a58e6d","kubernetes.io/config.mirror":"83154ed970e6208e036ff8de26a58e6d","kubernetes.io/config.seen":"2023-10-24T19:24:56.213306721Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:24:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1024 19:39:00.171277   33086 request.go:629] Waited for 194.898596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:39:00.171331   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes/multinode-632589
	I1024 19:39:00.171336   33086 round_trippers.go:469] Request Headers:
	I1024 19:39:00.171344   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:39:00.171350   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:39:00.173976   33086 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:39:00.174004   33086 round_trippers.go:577] Response Headers:
	I1024 19:39:00.174014   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:39:00.174023   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:39:00 GMT
	I1024 19:39:00.174031   33086 round_trippers.go:580]     Audit-Id: 76a284fd-5fb6-4e66-8d2a-303118f9881f
	I1024 19:39:00.174040   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:39:00.174049   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:39:00.174056   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:39:00.174278   33086 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:24:52Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1024 19:39:00.174678   33086 pod_ready.go:92] pod "kube-scheduler-multinode-632589" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:00.174701   33086 pod_ready.go:81] duration metric: took 399.414235ms waiting for pod "kube-scheduler-multinode-632589" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:00.174712   33086 pod_ready.go:38] duration metric: took 1.601001532s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:39:00.174726   33086 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:39:00.174775   33086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:39:00.187578   33086 system_svc.go:56] duration metric: took 12.845071ms WaitForService to wait for kubelet.
	I1024 19:39:00.187600   33086 kubeadm.go:581] duration metric: took 1.635570318s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:39:00.187617   33086 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:39:00.370969   33086 request.go:629] Waited for 183.286705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes
	I1024 19:39:00.371023   33086 round_trippers.go:463] GET https://192.168.39.247:8443/api/v1/nodes
	I1024 19:39:00.371027   33086 round_trippers.go:469] Request Headers:
	I1024 19:39:00.371036   33086 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:39:00.371042   33086 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:39:00.374332   33086 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:39:00.374355   33086 round_trippers.go:577] Response Headers:
	I1024 19:39:00.374364   33086 round_trippers.go:580]     Audit-Id: a99e373e-d500-40bc-9c7e-62d5ee92eb8f
	I1024 19:39:00.374371   33086 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:39:00.374384   33086 round_trippers.go:580]     Content-Type: application/json
	I1024 19:39:00.374396   33086 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cce9af5a-064b-4f12-b7ac-21b73ffe8345
	I1024 19:39:00.374407   33086 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e0110be6-a39e-42ea-97d7-c3d80d6bbfc9
	I1024 19:39:00.374415   33086 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:39:00 GMT
	I1024 19:39:00.374636   33086 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1216"},"items":[{"metadata":{"name":"multinode-632589","uid":"15774758-2227-4b50-b188-fee137ad951e","resourceVersion":"886","creationTimestamp":"2023-10-24T19:24:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-632589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-632589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_24_57_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15134 chars]
	I1024 19:39:00.375233   33086 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:39:00.375255   33086 node_conditions.go:123] node cpu capacity is 2
	I1024 19:39:00.375271   33086 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:39:00.375282   33086 node_conditions.go:123] node cpu capacity is 2
	I1024 19:39:00.375290   33086 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:39:00.375294   33086 node_conditions.go:123] node cpu capacity is 2
	I1024 19:39:00.375301   33086 node_conditions.go:105] duration metric: took 187.67975ms to run NodePressure ...
	I1024 19:39:00.375311   33086 start.go:228] waiting for startup goroutines ...
	I1024 19:39:00.375330   33086 start.go:242] writing updated cluster config ...
	I1024 19:39:00.375617   33086 ssh_runner.go:195] Run: rm -f paused
	I1024 19:39:00.422938   33086 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:39:00.425762   33086 out.go:177] * Done! kubectl is now configured to use "multinode-632589" cluster and "default" namespace by default
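	
	The pod_ready entries above record minikube polling the API server for each kube-system pod's Ready condition, with client-side throttling spacing out the GET requests. A minimal sketch of that kind of check using client-go is shown below; the kubeconfig path is a placeholder and the pod name is taken from the log — this is illustrative only, not the harness's own code.
	
	// Minimal readiness-check sketch (assumed placeholder kubeconfig path;
	// pod name "kube-proxy-6vn7s" appears in the log above).
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Build a client from a kubeconfig file (placeholder path).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Fetch the pod and report its Ready condition, the same field the
		// pod_ready.go:92 lines above summarize as Ready:"True".
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-6vn7s", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q Ready=%s\n", pod.Name, c.Status)
			}
		}
	}
	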
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 19:34:44 UTC, ends at Tue 2023-10-24 19:39:01 UTC. --
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.479173775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698176341479159369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=bf91fc29-23a1-48c2-8f33-a3e821e31df6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.479602800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=84542482-826a-4427-80f5-47a046eb0cc9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.479676813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=84542482-826a-4427-80f5-47a046eb0cc9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.483812451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:947c5f9b4602104cf5a240068dcd24e21bffee60cb2499a5bdf4af69af140898,PodSandboxId:cfdf74a596a8bbc51403cfd9a90f56daa34e1aeeb711457196b6b2ec4b721d11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698176154042432531,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4023756b-6e38-476d-8dec-90ea2346dc01,},Annotations:map[string]string{io.kubernetes.container.hash: cc9a5e5b,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1132460353a6d072271f052ce63624892a4bbc8994e99870f6c4b82d2d86237,PodSandboxId:29ae5af5c83431d5b47e885bf2df578e78089e564eea19a8093f5dd4a7eb5b7a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698176131357433279,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ddcjz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b81ca1e-f022-4358-be2c-27042e8503c1,},Annotations:map[string]string{io.kubernetes.container.hash: a90270f3,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd395810016fe37f63ed8fb068f3dd460388a598c58dc272763971d6d4f3aab,PodSandboxId:d8de3315d755f2a43387e7bc20b62661bce5d0a649bf02c5da71cb85bc589fa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698176130373337205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c5l8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aa782d-e6ed-45ad-b625-556d1a8503c0,},Annotations:map[string]string{io.kubernetes.container.hash: a119ceb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276fbd0e0a884fa731c25bd55e53bd19941d1903ac92ddb59e88dd7878585148,PodSandboxId:a85c05b5b7b79147f1b1b56efe575f17ca265655720d02175d24dc6b38e00f57,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698176125395625898,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xh444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7a21bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d8e53dc9792a4c8dad7b907e6c86fa016284973ea2af19ebb12ca79c9f55f2,PodSandboxId:f7e1cd24af5e5b12ca8a910a32d45750dcb6a143b6c9df46a3b22b7ed4893731,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698176123006681851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd49s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c573fd-3f4b-4d90-a366-6d859a
121185,},Annotations:map[string]string{io.kubernetes.container.hash: bd7cfcae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e88326d6e44f52f6843aa5c8878ee40d2de2e0868920fb7752f87a8a4efc75,PodSandboxId:cfdf74a596a8bbc51403cfd9a90f56daa34e1aeeb711457196b6b2ec4b721d11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698176122892550664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4023756b-6e38-476d-8dec-90ea2346
dc01,},Annotations:map[string]string{io.kubernetes.container.hash: cc9a5e5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01edc336668941c5bd3d7ad7133c47adbf26947f9882e0bc6b2c685ea317e335,PodSandboxId:2acda42b44320489c423813552b09e8c5c840604aec9fcb5962035189b1782f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698176116553027082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83154ed970e6208e036ff8de26a58e6d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa96bd05251b211aeb5a3799d0fb58179e65a9afeae1548bc956a03d026fa58,PodSandboxId:dfed823eea6b86b89fea93a327bb4f5e16dae9abb1a7d80028892e208704a78b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698176116410768432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07959cd35b2ca084078d0fd5b7cf919c,},Annotations:map[string]string{io.kubernetes.container.has
h: c1b3fcc1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da93c6387a8e8a61fe8d826f6c58759de8ff52c151360f09011d3e44bbf6b88,PodSandboxId:06d1784c735a8c3607eac912e97b00abb3852766ca66e29471d3f997eada71ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698176115940518449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4a5ca64f08e8d78cd58402e3f15810,},Annotations:map[string]string{io.
kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ef2df7b507e9274fe3a3337d4245e021ec88c201c33f3133acbf1f33230c05,PodSandboxId:476ec4ff28ee768fefb05d8a88d0321ace049452785c294e628f7797be68f88b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698176115812623549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3765446b9543fe4146506d2b0cf0aafd,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2e7911d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=84542482-826a-4427-80f5-47a046eb0cc9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.527493334Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=65522878-9656-497e-b9a0-6c3612fb21a1 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.527584335Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=65522878-9656-497e-b9a0-6c3612fb21a1 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.529116835Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a5dab84f-f68d-4add-b071-ffea7e797a1e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.529494714Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698176341529482225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a5dab84f-f68d-4add-b071-ffea7e797a1e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.530389577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1cac3952-3cab-404d-8ada-e3f8dce9b88e name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.530468216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1cac3952-3cab-404d-8ada-e3f8dce9b88e name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.530706570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:947c5f9b4602104cf5a240068dcd24e21bffee60cb2499a5bdf4af69af140898,PodSandboxId:cfdf74a596a8bbc51403cfd9a90f56daa34e1aeeb711457196b6b2ec4b721d11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698176154042432531,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4023756b-6e38-476d-8dec-90ea2346dc01,},Annotations:map[string]string{io.kubernetes.container.hash: cc9a5e5b,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1132460353a6d072271f052ce63624892a4bbc8994e99870f6c4b82d2d86237,PodSandboxId:29ae5af5c83431d5b47e885bf2df578e78089e564eea19a8093f5dd4a7eb5b7a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698176131357433279,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ddcjz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b81ca1e-f022-4358-be2c-27042e8503c1,},Annotations:map[string]string{io.kubernetes.container.hash: a90270f3,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd395810016fe37f63ed8fb068f3dd460388a598c58dc272763971d6d4f3aab,PodSandboxId:d8de3315d755f2a43387e7bc20b62661bce5d0a649bf02c5da71cb85bc589fa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698176130373337205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c5l8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aa782d-e6ed-45ad-b625-556d1a8503c0,},Annotations:map[string]string{io.kubernetes.container.hash: a119ceb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276fbd0e0a884fa731c25bd55e53bd19941d1903ac92ddb59e88dd7878585148,PodSandboxId:a85c05b5b7b79147f1b1b56efe575f17ca265655720d02175d24dc6b38e00f57,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698176125395625898,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xh444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7a21bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d8e53dc9792a4c8dad7b907e6c86fa016284973ea2af19ebb12ca79c9f55f2,PodSandboxId:f7e1cd24af5e5b12ca8a910a32d45750dcb6a143b6c9df46a3b22b7ed4893731,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698176123006681851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd49s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c573fd-3f4b-4d90-a366-6d859a
121185,},Annotations:map[string]string{io.kubernetes.container.hash: bd7cfcae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e88326d6e44f52f6843aa5c8878ee40d2de2e0868920fb7752f87a8a4efc75,PodSandboxId:cfdf74a596a8bbc51403cfd9a90f56daa34e1aeeb711457196b6b2ec4b721d11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698176122892550664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4023756b-6e38-476d-8dec-90ea2346
dc01,},Annotations:map[string]string{io.kubernetes.container.hash: cc9a5e5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01edc336668941c5bd3d7ad7133c47adbf26947f9882e0bc6b2c685ea317e335,PodSandboxId:2acda42b44320489c423813552b09e8c5c840604aec9fcb5962035189b1782f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698176116553027082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83154ed970e6208e036ff8de26a58e6d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa96bd05251b211aeb5a3799d0fb58179e65a9afeae1548bc956a03d026fa58,PodSandboxId:dfed823eea6b86b89fea93a327bb4f5e16dae9abb1a7d80028892e208704a78b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698176116410768432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07959cd35b2ca084078d0fd5b7cf919c,},Annotations:map[string]string{io.kubernetes.container.has
h: c1b3fcc1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da93c6387a8e8a61fe8d826f6c58759de8ff52c151360f09011d3e44bbf6b88,PodSandboxId:06d1784c735a8c3607eac912e97b00abb3852766ca66e29471d3f997eada71ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698176115940518449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4a5ca64f08e8d78cd58402e3f15810,},Annotations:map[string]string{io.
kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ef2df7b507e9274fe3a3337d4245e021ec88c201c33f3133acbf1f33230c05,PodSandboxId:476ec4ff28ee768fefb05d8a88d0321ace049452785c294e628f7797be68f88b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698176115812623549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3765446b9543fe4146506d2b0cf0aafd,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2e7911d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1cac3952-3cab-404d-8ada-e3f8dce9b88e name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.570910607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e04f3916-35bf-4c50-b582-f711fd9fe12a name=/runtime.v1.RuntimeService/Version
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.570991090Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e04f3916-35bf-4c50-b582-f711fd9fe12a name=/runtime.v1.RuntimeService/Version
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.572396625Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d4de2a1a-5423-4bfa-85d2-62b8393f63ed name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.572942200Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698176341572916949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d4de2a1a-5423-4bfa-85d2-62b8393f63ed name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.573560261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a9010635-b072-42b2-b722-7ac496b5d46c name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.573649286Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a9010635-b072-42b2-b722-7ac496b5d46c name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.573921398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:947c5f9b4602104cf5a240068dcd24e21bffee60cb2499a5bdf4af69af140898,PodSandboxId:cfdf74a596a8bbc51403cfd9a90f56daa34e1aeeb711457196b6b2ec4b721d11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698176154042432531,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4023756b-6e38-476d-8dec-90ea2346dc01,},Annotations:map[string]string{io.kubernetes.container.hash: cc9a5e5b,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1132460353a6d072271f052ce63624892a4bbc8994e99870f6c4b82d2d86237,PodSandboxId:29ae5af5c83431d5b47e885bf2df578e78089e564eea19a8093f5dd4a7eb5b7a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698176131357433279,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ddcjz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b81ca1e-f022-4358-be2c-27042e8503c1,},Annotations:map[string]string{io.kubernetes.container.hash: a90270f3,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd395810016fe37f63ed8fb068f3dd460388a598c58dc272763971d6d4f3aab,PodSandboxId:d8de3315d755f2a43387e7bc20b62661bce5d0a649bf02c5da71cb85bc589fa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698176130373337205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c5l8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aa782d-e6ed-45ad-b625-556d1a8503c0,},Annotations:map[string]string{io.kubernetes.container.hash: a119ceb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276fbd0e0a884fa731c25bd55e53bd19941d1903ac92ddb59e88dd7878585148,PodSandboxId:a85c05b5b7b79147f1b1b56efe575f17ca265655720d02175d24dc6b38e00f57,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698176125395625898,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xh444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7a21bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d8e53dc9792a4c8dad7b907e6c86fa016284973ea2af19ebb12ca79c9f55f2,PodSandboxId:f7e1cd24af5e5b12ca8a910a32d45750dcb6a143b6c9df46a3b22b7ed4893731,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698176123006681851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd49s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c573fd-3f4b-4d90-a366-6d859a
121185,},Annotations:map[string]string{io.kubernetes.container.hash: bd7cfcae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e88326d6e44f52f6843aa5c8878ee40d2de2e0868920fb7752f87a8a4efc75,PodSandboxId:cfdf74a596a8bbc51403cfd9a90f56daa34e1aeeb711457196b6b2ec4b721d11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698176122892550664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4023756b-6e38-476d-8dec-90ea2346
dc01,},Annotations:map[string]string{io.kubernetes.container.hash: cc9a5e5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01edc336668941c5bd3d7ad7133c47adbf26947f9882e0bc6b2c685ea317e335,PodSandboxId:2acda42b44320489c423813552b09e8c5c840604aec9fcb5962035189b1782f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698176116553027082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83154ed970e6208e036ff8de26a58e6d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa96bd05251b211aeb5a3799d0fb58179e65a9afeae1548bc956a03d026fa58,PodSandboxId:dfed823eea6b86b89fea93a327bb4f5e16dae9abb1a7d80028892e208704a78b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698176116410768432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07959cd35b2ca084078d0fd5b7cf919c,},Annotations:map[string]string{io.kubernetes.container.has
h: c1b3fcc1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da93c6387a8e8a61fe8d826f6c58759de8ff52c151360f09011d3e44bbf6b88,PodSandboxId:06d1784c735a8c3607eac912e97b00abb3852766ca66e29471d3f997eada71ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698176115940518449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4a5ca64f08e8d78cd58402e3f15810,},Annotations:map[string]string{io.
kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ef2df7b507e9274fe3a3337d4245e021ec88c201c33f3133acbf1f33230c05,PodSandboxId:476ec4ff28ee768fefb05d8a88d0321ace049452785c294e628f7797be68f88b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698176115812623549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3765446b9543fe4146506d2b0cf0aafd,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2e7911d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a9010635-b072-42b2-b722-7ac496b5d46c name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.612042200Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9eaa1c99-e3cf-4681-88ca-3698016a23ce name=/runtime.v1.RuntimeService/Version
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.612096100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9eaa1c99-e3cf-4681-88ca-3698016a23ce name=/runtime.v1.RuntimeService/Version
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.614513402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f9515391-bbd3-45cc-830e-191be498f6f9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.614944229Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698176341614927170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f9515391-bbd3-45cc-830e-191be498f6f9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.615810103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1014d2d5-4865-4605-bfe8-536d7c83e00c name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.615958769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1014d2d5-4865-4605-bfe8-536d7c83e00c name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:39:01 multinode-632589 crio[714]: time="2023-10-24 19:39:01.616176146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:947c5f9b4602104cf5a240068dcd24e21bffee60cb2499a5bdf4af69af140898,PodSandboxId:cfdf74a596a8bbc51403cfd9a90f56daa34e1aeeb711457196b6b2ec4b721d11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698176154042432531,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4023756b-6e38-476d-8dec-90ea2346dc01,},Annotations:map[string]string{io.kubernetes.container.hash: cc9a5e5b,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1132460353a6d072271f052ce63624892a4bbc8994e99870f6c4b82d2d86237,PodSandboxId:29ae5af5c83431d5b47e885bf2df578e78089e564eea19a8093f5dd4a7eb5b7a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1698176131357433279,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ddcjz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b81ca1e-f022-4358-be2c-27042e8503c1,},Annotations:map[string]string{io.kubernetes.container.hash: a90270f3,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd395810016fe37f63ed8fb068f3dd460388a598c58dc272763971d6d4f3aab,PodSandboxId:d8de3315d755f2a43387e7bc20b62661bce5d0a649bf02c5da71cb85bc589fa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698176130373337205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c5l8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aa782d-e6ed-45ad-b625-556d1a8503c0,},Annotations:map[string]string{io.kubernetes.container.hash: a119ceb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276fbd0e0a884fa731c25bd55e53bd19941d1903ac92ddb59e88dd7878585148,PodSandboxId:a85c05b5b7b79147f1b1b56efe575f17ca265655720d02175d24dc6b38e00f57,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1698176125395625898,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xh444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfd9e8e0-4e6e-43ab-b7a8-3fcd4cf7895b,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7a21bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d8e53dc9792a4c8dad7b907e6c86fa016284973ea2af19ebb12ca79c9f55f2,PodSandboxId:f7e1cd24af5e5b12ca8a910a32d45750dcb6a143b6c9df46a3b22b7ed4893731,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698176123006681851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd49s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c573fd-3f4b-4d90-a366-6d859a
121185,},Annotations:map[string]string{io.kubernetes.container.hash: bd7cfcae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e88326d6e44f52f6843aa5c8878ee40d2de2e0868920fb7752f87a8a4efc75,PodSandboxId:cfdf74a596a8bbc51403cfd9a90f56daa34e1aeeb711457196b6b2ec4b721d11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698176122892550664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4023756b-6e38-476d-8dec-90ea2346
dc01,},Annotations:map[string]string{io.kubernetes.container.hash: cc9a5e5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01edc336668941c5bd3d7ad7133c47adbf26947f9882e0bc6b2c685ea317e335,PodSandboxId:2acda42b44320489c423813552b09e8c5c840604aec9fcb5962035189b1782f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698176116553027082,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83154ed970e6208e036ff8de26a58e6d,},Annot
ations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa96bd05251b211aeb5a3799d0fb58179e65a9afeae1548bc956a03d026fa58,PodSandboxId:dfed823eea6b86b89fea93a327bb4f5e16dae9abb1a7d80028892e208704a78b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698176116410768432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07959cd35b2ca084078d0fd5b7cf919c,},Annotations:map[string]string{io.kubernetes.container.has
h: c1b3fcc1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da93c6387a8e8a61fe8d826f6c58759de8ff52c151360f09011d3e44bbf6b88,PodSandboxId:06d1784c735a8c3607eac912e97b00abb3852766ca66e29471d3f997eada71ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698176115940518449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4a5ca64f08e8d78cd58402e3f15810,},Annotations:map[string]string{io.
kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ef2df7b507e9274fe3a3337d4245e021ec88c201c33f3133acbf1f33230c05,PodSandboxId:476ec4ff28ee768fefb05d8a88d0321ace049452785c294e628f7797be68f88b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698176115812623549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-632589,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3765446b9543fe4146506d2b0cf0aafd,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2e7911d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1014d2d5-4865-4605-bfe8-536d7c83e00c name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	947c5f9b46021       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   cfdf74a596a8b       storage-provisioner
	c1132460353a6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   29ae5af5c8343       busybox-5bc68d56bd-ddcjz
	2bd395810016f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   d8de3315d755f       coredns-5dd5756b68-c5l8s
	276fbd0e0a884       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   a85c05b5b7b79       kindnet-xh444
	57d8e53dc9792       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      3 minutes ago       Running             kube-proxy                1                   f7e1cd24af5e5       kube-proxy-gd49s
	49e88326d6e44       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   cfdf74a596a8b       storage-provisioner
	01edc33666894       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      3 minutes ago       Running             kube-scheduler            1                   2acda42b44320       kube-scheduler-multinode-632589
	baa96bd05251b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   dfed823eea6b8       etcd-multinode-632589
	0da93c6387a8e       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      3 minutes ago       Running             kube-controller-manager   1                   06d1784c735a8       kube-controller-manager-multinode-632589
	a1ef2df7b507e       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      3 minutes ago       Running             kube-apiserver            1                   476ec4ff28ee7       kube-apiserver-multinode-632589
	
	* 
	* ==> coredns [2bd395810016fe37f63ed8fb068f3dd460388a598c58dc272763971d6d4f3aab] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57337 - 34570 "HINFO IN 1454703970922927685.482235316797207562. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01077273s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-632589
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-632589
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=multinode-632589
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_24_57_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:24:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-632589
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:38:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:35:52 +0000   Tue, 24 Oct 2023 19:24:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:35:52 +0000   Tue, 24 Oct 2023 19:24:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:35:52 +0000   Tue, 24 Oct 2023 19:24:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:35:52 +0000   Tue, 24 Oct 2023 19:35:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    multinode-632589
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7a2a529e06345baafa6e4c8e4cddc27
	  System UUID:                c7a2a529-e063-45ba-afa6-e4c8e4cddc27
	  Boot ID:                    a12e53d8-023b-476e-9b76-d4eed26eda89
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-ddcjz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-c5l8s                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-632589                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-xh444                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-632589             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-632589    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-gd49s                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-632589             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m38s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-632589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-632589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-632589 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-632589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-632589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-632589 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-632589 event: Registered Node multinode-632589 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-632589 status is now: NodeReady
	  Normal  Starting                 3m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m47s (x8 over 3m47s)  kubelet          Node multinode-632589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m47s (x8 over 3m47s)  kubelet          Node multinode-632589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m47s (x7 over 3m47s)  kubelet          Node multinode-632589 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m27s                  node-controller  Node multinode-632589 event: Registered Node multinode-632589 in Controller
	
	
	Name:               multinode-632589-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-632589-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:37:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-632589-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:38:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:37:16 +0000   Tue, 24 Oct 2023 19:37:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:37:16 +0000   Tue, 24 Oct 2023 19:37:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:37:16 +0000   Tue, 24 Oct 2023 19:37:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:37:16 +0000   Tue, 24 Oct 2023 19:37:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    multinode-632589-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 181b01b743a5434eb054a2d660635f48
	  System UUID:                181b01b7-43a5-434e-b054-a2d660635f48
	  Boot ID:                    6dad4c62-d4a1-4d68-aeb0-c7048487d91a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-d2p4q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-qvkwv               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-6vn7s            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 103s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-632589-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-632589-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-632589-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-632589-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m54s                  kubelet     Node multinode-632589-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m11s (x2 over 3m11s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 105s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  105s (x2 over 105s)    kubelet     Node multinode-632589-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x2 over 105s)    kubelet     Node multinode-632589-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x2 over 105s)    kubelet     Node multinode-632589-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                105s                   kubelet     Node multinode-632589-m02 status is now: NodeReady
	
	
	Name:               multinode-632589-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-632589-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:38:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-632589-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:38:57 +0000   Tue, 24 Oct 2023 19:38:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:38:57 +0000   Tue, 24 Oct 2023 19:38:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:38:57 +0000   Tue, 24 Oct 2023 19:38:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:38:57 +0000   Tue, 24 Oct 2023 19:38:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    multinode-632589-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 acd94d864589487692804afca33b498c
	  System UUID:                acd94d86-4589-4876-9280-4afca33b498c
	  Boot ID:                    338363f5-36d0-4d25-a2e2-515e9af5333c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8pw8v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kindnet-pwmd9               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-vjr8q            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 5s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-632589-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-632589-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-632589-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-632589-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                11m                kubelet     Node multinode-632589-m03 status is now: NodeReady
	  Normal   NodeNotReady             63s                kubelet     Node multinode-632589-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        38s (x2 over 98s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeHasSufficientMemory  5s (x3 over 11m)   kubelet     Node multinode-632589-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x3 over 11m)   kubelet     Node multinode-632589-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x3 over 11m)   kubelet     Node multinode-632589-m03 status is now: NodeHasSufficientPID
	  Normal   Starting                 4s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  4s (x2 over 4s)    kubelet     Node multinode-632589-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s (x2 over 4s)    kubelet     Node multinode-632589-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s (x2 over 4s)    kubelet     Node multinode-632589-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                4s                 kubelet     Node multinode-632589-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Oct24 19:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068598] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.330059] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.264019] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.161688] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000008] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.475957] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.645960] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.107579] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.135338] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.102505] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.196302] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[Oct24 19:35] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [baa96bd05251b211aeb5a3799d0fb58179e65a9afeae1548bc956a03d026fa58] <==
	* {"level":"info","ts":"2023-10-24T19:35:18.065985Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-24T19:35:18.066014Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-24T19:35:18.066031Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-24T19:35:18.066286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 switched to configuration voters=(13118041866946430825)"}
	{"level":"info","ts":"2023-10-24T19:35:18.066323Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7fda2fc0436a8884","local-member-id":"b60ca5935c0b4769","added-peer-id":"b60ca5935c0b4769","added-peer-peer-urls":["https://192.168.39.247:2380"]}
	{"level":"info","ts":"2023-10-24T19:35:18.066389Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7fda2fc0436a8884","local-member-id":"b60ca5935c0b4769","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:35:18.066408Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:35:19.743045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-24T19:35:19.743166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-24T19:35:19.743212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 received MsgPreVoteResp from b60ca5935c0b4769 at term 2"}
	{"level":"info","ts":"2023-10-24T19:35:19.743254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became candidate at term 3"}
	{"level":"info","ts":"2023-10-24T19:35:19.743279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 received MsgVoteResp from b60ca5935c0b4769 at term 3"}
	{"level":"info","ts":"2023-10-24T19:35:19.743305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became leader at term 3"}
	{"level":"info","ts":"2023-10-24T19:35:19.74333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b60ca5935c0b4769 elected leader b60ca5935c0b4769 at term 3"}
	{"level":"info","ts":"2023-10-24T19:35:19.745979Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b60ca5935c0b4769","local-member-attributes":"{Name:multinode-632589 ClientURLs:[https://192.168.39.247:2379]}","request-path":"/0/members/b60ca5935c0b4769/attributes","cluster-id":"7fda2fc0436a8884","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T19:35:19.746034Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:35:19.746388Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T19:35:19.746443Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T19:35:19.746002Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:35:19.747342Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.247:2379"}
	{"level":"info","ts":"2023-10-24T19:35:19.747548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T19:35:24.070358Z","caller":"traceutil/trace.go:171","msg":"trace[1985709745] linearizableReadLoop","detail":"{readStateIndex:887; appliedIndex:886; }","duration":"162.736808ms","start":"2023-10-24T19:35:23.907604Z","end":"2023-10-24T19:35:24.070341Z","steps":["trace[1985709745] 'read index received'  (duration: 158.964998ms)","trace[1985709745] 'applied index is now lower than readState.Index'  (duration: 3.771321ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:35:24.070471Z","caller":"traceutil/trace.go:171","msg":"trace[229795180] transaction","detail":"{read_only:false; response_revision:832; number_of_response:1; }","duration":"209.452243ms","start":"2023-10-24T19:35:23.861011Z","end":"2023-10-24T19:35:24.070464Z","steps":["trace[229795180] 'process raft request'  (duration: 205.603376ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:35:24.070786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.195718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:3973"}
	{"level":"info","ts":"2023-10-24T19:35:24.070967Z","caller":"traceutil/trace.go:171","msg":"trace[2007439659] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:832; }","duration":"163.387761ms","start":"2023-10-24T19:35:23.90757Z","end":"2023-10-24T19:35:24.070958Z","steps":["trace[2007439659] 'agreement among raft nodes before linearized reading'  (duration: 163.144086ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:39:02 up 4 min,  0 users,  load average: 0.56, 0.30, 0.13
	Linux multinode-632589 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [276fbd0e0a884fa731c25bd55e53bd19941d1903ac92ddb59e88dd7878585148] <==
	* I1024 19:38:17.105281       1 main.go:250] Node multinode-632589-m02 has CIDR [10.244.1.0/24] 
	I1024 19:38:17.105446       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I1024 19:38:17.105468       1 main.go:250] Node multinode-632589-m03 has CIDR [10.244.3.0/24] 
	I1024 19:38:27.124160       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I1024 19:38:27.124272       1 main.go:227] handling current node
	I1024 19:38:27.124297       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I1024 19:38:27.124315       1 main.go:250] Node multinode-632589-m02 has CIDR [10.244.1.0/24] 
	I1024 19:38:27.124479       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I1024 19:38:27.124504       1 main.go:250] Node multinode-632589-m03 has CIDR [10.244.3.0/24] 
	I1024 19:38:37.129461       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I1024 19:38:37.129669       1 main.go:227] handling current node
	I1024 19:38:37.129708       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I1024 19:38:37.129744       1 main.go:250] Node multinode-632589-m02 has CIDR [10.244.1.0/24] 
	I1024 19:38:37.129945       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I1024 19:38:37.129981       1 main.go:250] Node multinode-632589-m03 has CIDR [10.244.3.0/24] 
	I1024 19:38:47.137507       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I1024 19:38:47.137563       1 main.go:227] handling current node
	I1024 19:38:47.137582       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I1024 19:38:47.137589       1 main.go:250] Node multinode-632589-m02 has CIDR [10.244.1.0/24] 
	I1024 19:38:47.137715       1 main.go:223] Handling node with IPs: map[192.168.39.13:{}]
	I1024 19:38:47.137751       1 main.go:250] Node multinode-632589-m03 has CIDR [10.244.3.0/24] 
	I1024 19:38:57.147039       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I1024 19:38:57.147089       1 main.go:227] handling current node
	I1024 19:38:57.147112       1 main.go:223] Handling node with IPs: map[192.168.39.186:{}]
	I1024 19:38:57.147118       1 main.go:250] Node multinode-632589-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [a1ef2df7b507e9274fe3a3337d4245e021ec88c201c33f3133acbf1f33230c05] <==
	* I1024 19:35:21.191415       1 establishing_controller.go:76] Starting EstablishingController
	I1024 19:35:21.191444       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1024 19:35:21.191470       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1024 19:35:21.191504       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1024 19:35:21.247268       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 19:35:21.282786       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1024 19:35:21.310897       1 shared_informer.go:318] Caches are synced for configmaps
	I1024 19:35:21.311148       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1024 19:35:21.311192       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1024 19:35:21.311199       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1024 19:35:21.311267       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1024 19:35:21.317083       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1024 19:35:21.317151       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 19:35:21.317701       1 aggregator.go:166] initial CRD sync complete...
	I1024 19:35:21.317741       1 autoregister_controller.go:141] Starting autoregister controller
	I1024 19:35:21.317747       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1024 19:35:21.317752       1 cache.go:39] Caches are synced for autoregister controller
	E1024 19:35:21.320631       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1024 19:35:22.115120       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1024 19:35:24.099378       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1024 19:35:24.272093       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1024 19:35:24.288734       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1024 19:35:24.365609       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1024 19:35:24.373148       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1024 19:36:11.586611       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [0da93c6387a8e8a61fe8d826f6c58759de8ff52c151360f09011d3e44bbf6b88] <==
	* I1024 19:37:16.280156       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-632589-m03"
	I1024 19:37:16.280288       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-632589-m02\" does not exist"
	I1024 19:37:16.280659       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-wrmmm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-wrmmm"
	I1024 19:37:16.299697       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-632589-m02" podCIDRs=["10.244.1.0/24"]
	I1024 19:37:16.630411       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-632589-m02"
	I1024 19:37:17.166728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="72.078µs"
	I1024 19:37:30.449004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="161.065µs"
	I1024 19:37:31.030238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="110.985µs"
	I1024 19:37:31.034027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="93.651µs"
	I1024 19:37:58.671561       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-632589-m02"
	I1024 19:38:53.816788       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-d2p4q"
	I1024 19:38:53.828388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="25.368034ms"
	I1024 19:38:53.854067       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="25.591453ms"
	I1024 19:38:53.871907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.691823ms"
	I1024 19:38:53.872009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.898µs"
	I1024 19:38:55.314156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.899053ms"
	I1024 19:38:55.314682       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="234.884µs"
	I1024 19:38:56.140917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="84.719µs"
	I1024 19:38:56.831042       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-632589-m02"
	I1024 19:38:57.511455       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-632589-m03\" does not exist"
	I1024 19:38:57.512400       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-632589-m02"
	I1024 19:38:57.512593       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-8pw8v" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-8pw8v"
	I1024 19:38:57.534809       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-632589-m03" podCIDRs=["10.244.2.0/24"]
	I1024 19:38:57.551952       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-632589-m03"
	I1024 19:38:58.414076       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="229.579µs"
	
	* 
	* ==> kube-proxy [57d8e53dc9792a4c8dad7b907e6c86fa016284973ea2af19ebb12ca79c9f55f2] <==
	* I1024 19:35:23.520059       1 server_others.go:69] "Using iptables proxy"
	I1024 19:35:23.533191       1 node.go:141] Successfully retrieved node IP: 192.168.39.247
	I1024 19:35:23.628348       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 19:35:23.628404       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 19:35:23.630728       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:35:23.630768       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:35:23.630998       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:35:23.631008       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:35:23.648339       1 config.go:188] "Starting service config controller"
	I1024 19:35:23.648366       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:35:23.648396       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:35:23.648402       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:35:23.655932       1 config.go:315] "Starting node config controller"
	I1024 19:35:23.655947       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:35:23.749055       1 shared_informer.go:318] Caches are synced for service config
	I1024 19:35:23.749184       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 19:35:23.757031       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [01edc336668941c5bd3d7ad7133c47adbf26947f9882e0bc6b2c685ea317e335] <==
	* I1024 19:35:18.427653       1 serving.go:348] Generated self-signed cert in-memory
	W1024 19:35:21.231411       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 19:35:21.231569       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 19:35:21.231677       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 19:35:21.231705       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 19:35:21.278726       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 19:35:21.279060       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:35:21.283674       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 19:35:21.283718       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:35:21.284352       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 19:35:21.284396       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 19:35:21.383919       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 19:34:44 UTC, ends at Tue 2023-10-24 19:39:02 UTC. --
	Oct 24 19:35:23 multinode-632589 kubelet[920]: E1024 19:35:23.457299     920 projected.go:198] Error preparing data for projected volume kube-api-access-kmc5g for pod default/busybox-5bc68d56bd-ddcjz: object "default"/"kube-root-ca.crt" not registered
	Oct 24 19:35:23 multinode-632589 kubelet[920]: E1024 19:35:23.457353     920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5b81ca1e-f022-4358-be2c-27042e8503c1-kube-api-access-kmc5g podName:5b81ca1e-f022-4358-be2c-27042e8503c1 nodeName:}" failed. No retries permitted until 2023-10-24 19:35:25.457334605 +0000 UTC m=+10.884394707 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-kmc5g" (UniqueName: "kubernetes.io/projected/5b81ca1e-f022-4358-be2c-27042e8503c1-kube-api-access-kmc5g") pod "busybox-5bc68d56bd-ddcjz" (UID: "5b81ca1e-f022-4358-be2c-27042e8503c1") : object "default"/"kube-root-ca.crt" not registered
	Oct 24 19:35:23 multinode-632589 kubelet[920]: E1024 19:35:23.829991     920 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-ddcjz" podUID="5b81ca1e-f022-4358-be2c-27042e8503c1"
	Oct 24 19:35:23 multinode-632589 kubelet[920]: E1024 19:35:23.830174     920 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-c5l8s" podUID="20aa782d-e6ed-45ad-b625-556d1a8503c0"
	Oct 24 19:35:25 multinode-632589 kubelet[920]: E1024 19:35:25.372211     920 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 24 19:35:25 multinode-632589 kubelet[920]: E1024 19:35:25.372265     920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/20aa782d-e6ed-45ad-b625-556d1a8503c0-config-volume podName:20aa782d-e6ed-45ad-b625-556d1a8503c0 nodeName:}" failed. No retries permitted until 2023-10-24 19:35:29.372246363 +0000 UTC m=+14.799306467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/20aa782d-e6ed-45ad-b625-556d1a8503c0-config-volume") pod "coredns-5dd5756b68-c5l8s" (UID: "20aa782d-e6ed-45ad-b625-556d1a8503c0") : object "kube-system"/"coredns" not registered
	Oct 24 19:35:25 multinode-632589 kubelet[920]: E1024 19:35:25.472674     920 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Oct 24 19:35:25 multinode-632589 kubelet[920]: E1024 19:35:25.472699     920 projected.go:198] Error preparing data for projected volume kube-api-access-kmc5g for pod default/busybox-5bc68d56bd-ddcjz: object "default"/"kube-root-ca.crt" not registered
	Oct 24 19:35:25 multinode-632589 kubelet[920]: E1024 19:35:25.472740     920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5b81ca1e-f022-4358-be2c-27042e8503c1-kube-api-access-kmc5g podName:5b81ca1e-f022-4358-be2c-27042e8503c1 nodeName:}" failed. No retries permitted until 2023-10-24 19:35:29.472727235 +0000 UTC m=+14.899787338 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-kmc5g" (UniqueName: "kubernetes.io/projected/5b81ca1e-f022-4358-be2c-27042e8503c1-kube-api-access-kmc5g") pod "busybox-5bc68d56bd-ddcjz" (UID: "5b81ca1e-f022-4358-be2c-27042e8503c1") : object "default"/"kube-root-ca.crt" not registered
	Oct 24 19:35:25 multinode-632589 kubelet[920]: E1024 19:35:25.829921     920 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-c5l8s" podUID="20aa782d-e6ed-45ad-b625-556d1a8503c0"
	Oct 24 19:35:25 multinode-632589 kubelet[920]: E1024 19:35:25.830138     920 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-ddcjz" podUID="5b81ca1e-f022-4358-be2c-27042e8503c1"
	Oct 24 19:35:27 multinode-632589 kubelet[920]: I1024 19:35:27.091431     920 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 24 19:35:54 multinode-632589 kubelet[920]: I1024 19:35:54.018281     920 scope.go:117] "RemoveContainer" containerID="49e88326d6e44f52f6843aa5c8878ee40d2de2e0868920fb7752f87a8a4efc75"
	Oct 24 19:36:14 multinode-632589 kubelet[920]: E1024 19:36:14.849929     920 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 19:36:14 multinode-632589 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 19:36:14 multinode-632589 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 19:36:14 multinode-632589 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 19:37:14 multinode-632589 kubelet[920]: E1024 19:37:14.850074     920 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 19:37:14 multinode-632589 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 19:37:14 multinode-632589 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 19:37:14 multinode-632589 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 19:38:14 multinode-632589 kubelet[920]: E1024 19:38:14.855022     920 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 19:38:14 multinode-632589 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 19:38:14 multinode-632589 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 19:38:14 multinode-632589 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-632589 -n multinode-632589
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-632589 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (689.34s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (143.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 stop
E1024 19:41:00.584285   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-632589 stop: exit status 82 (2m1.53433286s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-632589"  ...
	* Stopping node "multinode-632589"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-632589 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-632589 status: exit status 3 (18.817379751s)

                                                
                                                
-- stdout --
	multinode-632589
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-632589-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 19:41:25.149590   35389 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.247:22: connect: no route to host
	E1024 19:41:25.149638   35389 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.247:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-632589 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-632589 -n multinode-632589
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-632589 -n multinode-632589: exit status 3 (3.188594181s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 19:41:28.505669   35492 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.247:22: connect: no route to host
	E1024 19:41:28.505697   35492 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.247:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-632589" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.54s)

                                                
                                    
TestPreload (185.14s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-963013 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1024 19:51:00.584549   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:51:13.604243   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-963013 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m38.483255725s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-963013 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-963013 image pull gcr.io/k8s-minikube/busybox: (1.145724657s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-963013
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-963013: (7.100792393s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-963013 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-963013 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m15.335717297s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-963013 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:523: *** TestPreload FAILED at 2023-10-24 19:52:48.168555545 +0000 UTC m=+3129.934034555
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-963013 -n test-preload-963013
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-963013 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-963013 logs -n 25: (1.0790239s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-632589 ssh -n                                                                 | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | multinode-632589-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n multinode-632589 sudo cat                                       | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | /home/docker/cp-test_multinode-632589-m03_multinode-632589.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-632589 cp multinode-632589-m03:/home/docker/cp-test.txt                       | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | multinode-632589-m02:/home/docker/cp-test_multinode-632589-m03_multinode-632589-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n                                                                 | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | multinode-632589-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-632589 ssh -n multinode-632589-m02 sudo cat                                   | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | /home/docker/cp-test_multinode-632589-m03_multinode-632589-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-632589 node stop m03                                                          | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	| node    | multinode-632589 node start                                                             | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-632589                                                                | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC |                     |
	| stop    | -p multinode-632589                                                                     | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC |                     |
	| start   | -p multinode-632589                                                                     | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:29 UTC | 24 Oct 23 19:39 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-632589                                                                | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC |                     |
	| node    | multinode-632589 node delete                                                            | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-632589 stop                                                                   | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC |                     |
	| start   | -p multinode-632589                                                                     | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:41 UTC | 24 Oct 23 19:48 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-632589                                                                | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:48 UTC |                     |
	| start   | -p multinode-632589-m02                                                                 | multinode-632589-m02 | jenkins | v1.31.2 | 24 Oct 23 19:48 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-632589-m03                                                                 | multinode-632589-m03 | jenkins | v1.31.2 | 24 Oct 23 19:48 UTC | 24 Oct 23 19:49 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-632589                                                                 | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:49 UTC |                     |
	| delete  | -p multinode-632589-m03                                                                 | multinode-632589-m03 | jenkins | v1.31.2 | 24 Oct 23 19:49 UTC | 24 Oct 23 19:49 UTC |
	| delete  | -p multinode-632589                                                                     | multinode-632589     | jenkins | v1.31.2 | 24 Oct 23 19:49 UTC | 24 Oct 23 19:49 UTC |
	| start   | -p test-preload-963013                                                                  | test-preload-963013  | jenkins | v1.31.2 | 24 Oct 23 19:49 UTC | 24 Oct 23 19:51 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-963013 image pull                                                          | test-preload-963013  | jenkins | v1.31.2 | 24 Oct 23 19:51 UTC | 24 Oct 23 19:51 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-963013                                                                  | test-preload-963013  | jenkins | v1.31.2 | 24 Oct 23 19:51 UTC | 24 Oct 23 19:51 UTC |
	| start   | -p test-preload-963013                                                                  | test-preload-963013  | jenkins | v1.31.2 | 24 Oct 23 19:51 UTC | 24 Oct 23 19:52 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-963013 image list                                                          | test-preload-963013  | jenkins | v1.31.2 | 24 Oct 23 19:52 UTC | 24 Oct 23 19:52 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:51:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:51:32.652870   38158 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:51:32.653013   38158 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:51:32.653022   38158 out.go:309] Setting ErrFile to fd 2...
	I1024 19:51:32.653027   38158 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:51:32.653227   38158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 19:51:32.653747   38158 out.go:303] Setting JSON to false
	I1024 19:51:32.654625   38158 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5391,"bootTime":1698171702,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:51:32.654682   38158 start.go:138] virtualization: kvm guest
	I1024 19:51:32.657146   38158 out.go:177] * [test-preload-963013] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:51:32.658638   38158 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:51:32.660012   38158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:51:32.658677   38158 notify.go:220] Checking for updates...
	I1024 19:51:32.663056   38158 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:51:32.664520   38158 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:51:32.665907   38158 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:51:32.667181   38158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:51:32.668843   38158 config.go:182] Loaded profile config "test-preload-963013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1024 19:51:32.669246   38158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:51:32.669287   38158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:51:32.683132   38158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40789
	I1024 19:51:32.683509   38158 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:51:32.684028   38158 main.go:141] libmachine: Using API Version  1
	I1024 19:51:32.684049   38158 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:51:32.684366   38158 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:51:32.684517   38158 main.go:141] libmachine: (test-preload-963013) Calling .DriverName
	I1024 19:51:32.686392   38158 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 19:51:32.687814   38158 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:51:32.688074   38158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:51:32.688109   38158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:51:32.701678   38158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I1024 19:51:32.702039   38158 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:51:32.702432   38158 main.go:141] libmachine: Using API Version  1
	I1024 19:51:32.702453   38158 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:51:32.702717   38158 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:51:32.702859   38158 main.go:141] libmachine: (test-preload-963013) Calling .DriverName
	I1024 19:51:32.735222   38158 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 19:51:32.736502   38158 start.go:298] selected driver: kvm2
	I1024 19:51:32.736514   38158 start.go:902] validating driver "kvm2" against &{Name:test-preload-963013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-963013 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:51:32.736642   38158 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:51:32.737354   38158 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:51:32.737432   38158 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:51:32.750622   38158 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:51:32.750909   38158 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:51:32.750941   38158 cni.go:84] Creating CNI manager for ""
	I1024 19:51:32.750959   38158 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:51:32.750975   38158 start_flags.go:323] config:
	{Name:test-preload-963013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-963013 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:51:32.751151   38158 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:51:32.752777   38158 out.go:177] * Starting control plane node test-preload-963013 in cluster test-preload-963013
	I1024 19:51:32.754117   38158 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1024 19:51:32.778013   38158 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1024 19:51:32.778030   38158 cache.go:57] Caching tarball of preloaded images
	I1024 19:51:32.778143   38158 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1024 19:51:32.779580   38158 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1024 19:51:32.780794   38158 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:51:32.808600   38158 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1024 19:51:36.955593   38158 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:51:36.955683   38158 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:51:37.849858   38158 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I1024 19:51:37.850010   38158 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/config.json ...
	I1024 19:51:37.850262   38158 start.go:365] acquiring machines lock for test-preload-963013: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:51:37.850341   38158 start.go:369] acquired machines lock for "test-preload-963013" in 54.733µs
	I1024 19:51:37.850363   38158 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:51:37.850371   38158 fix.go:54] fixHost starting: 
	I1024 19:51:37.850649   38158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:51:37.850709   38158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:51:37.864617   38158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I1024 19:51:37.865070   38158 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:51:37.865471   38158 main.go:141] libmachine: Using API Version  1
	I1024 19:51:37.865494   38158 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:51:37.865822   38158 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:51:37.866018   38158 main.go:141] libmachine: (test-preload-963013) Calling .DriverName
	I1024 19:51:37.866151   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetState
	I1024 19:51:37.867788   38158 fix.go:102] recreateIfNeeded on test-preload-963013: state=Stopped err=<nil>
	I1024 19:51:37.867806   38158 main.go:141] libmachine: (test-preload-963013) Calling .DriverName
	W1024 19:51:37.867979   38158 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:51:37.870217   38158 out.go:177] * Restarting existing kvm2 VM for "test-preload-963013" ...
	I1024 19:51:37.871578   38158 main.go:141] libmachine: (test-preload-963013) Calling .Start
	I1024 19:51:37.871753   38158 main.go:141] libmachine: (test-preload-963013) Ensuring networks are active...
	I1024 19:51:37.872451   38158 main.go:141] libmachine: (test-preload-963013) Ensuring network default is active
	I1024 19:51:37.872732   38158 main.go:141] libmachine: (test-preload-963013) Ensuring network mk-test-preload-963013 is active
	I1024 19:51:37.873081   38158 main.go:141] libmachine: (test-preload-963013) Getting domain xml...
	I1024 19:51:37.873785   38158 main.go:141] libmachine: (test-preload-963013) Creating domain...
	I1024 19:51:39.066253   38158 main.go:141] libmachine: (test-preload-963013) Waiting to get IP...
	I1024 19:51:39.067019   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:39.067387   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:39.067508   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:39.067386   38205 retry.go:31] will retry after 306.85514ms: waiting for machine to come up
	I1024 19:51:39.376175   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:39.376619   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:39.376637   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:39.376605   38205 retry.go:31] will retry after 282.877292ms: waiting for machine to come up
	I1024 19:51:39.661100   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:39.661546   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:39.661572   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:39.661503   38205 retry.go:31] will retry after 425.489713ms: waiting for machine to come up
	I1024 19:51:40.088060   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:40.088465   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:40.088492   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:40.088387   38205 retry.go:31] will retry after 472.254603ms: waiting for machine to come up
	I1024 19:51:40.561906   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:40.562287   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:40.562315   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:40.562237   38205 retry.go:31] will retry after 538.895177ms: waiting for machine to come up
	I1024 19:51:41.102933   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:41.103367   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:41.103389   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:41.103333   38205 retry.go:31] will retry after 921.320537ms: waiting for machine to come up
	I1024 19:51:42.026346   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:42.026724   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:42.026754   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:42.026660   38205 retry.go:31] will retry after 894.260205ms: waiting for machine to come up
	I1024 19:51:42.922812   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:42.923193   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:42.923228   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:42.923135   38205 retry.go:31] will retry after 1.470101836s: waiting for machine to come up
	I1024 19:51:44.395752   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:44.396082   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:44.396106   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:44.396046   38205 retry.go:31] will retry after 1.52282554s: waiting for machine to come up
	I1024 19:51:45.920691   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:45.921134   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:45.921161   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:45.921090   38205 retry.go:31] will retry after 2.108224796s: waiting for machine to come up
	I1024 19:51:48.030992   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:48.031321   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:48.031347   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:48.031305   38205 retry.go:31] will retry after 2.518825241s: waiting for machine to come up
	I1024 19:51:50.553243   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:50.553708   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:50.553740   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:50.553672   38205 retry.go:31] will retry after 2.66634786s: waiting for machine to come up
	I1024 19:51:53.221272   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:53.221700   38158 main.go:141] libmachine: (test-preload-963013) DBG | unable to find current IP address of domain test-preload-963013 in network mk-test-preload-963013
	I1024 19:51:53.221735   38158 main.go:141] libmachine: (test-preload-963013) DBG | I1024 19:51:53.221625   38205 retry.go:31] will retry after 4.319399686s: waiting for machine to come up
	I1024 19:51:57.546240   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.546707   38158 main.go:141] libmachine: (test-preload-963013) Found IP for machine: 192.168.39.204
	I1024 19:51:57.546739   38158 main.go:141] libmachine: (test-preload-963013) Reserving static IP address...
	I1024 19:51:57.546755   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has current primary IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.547194   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "test-preload-963013", mac: "52:54:00:bc:ad:83", ip: "192.168.39.204"} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:57.547219   38158 main.go:141] libmachine: (test-preload-963013) Reserved static IP address: 192.168.39.204
	I1024 19:51:57.547238   38158 main.go:141] libmachine: (test-preload-963013) DBG | skip adding static IP to network mk-test-preload-963013 - found existing host DHCP lease matching {name: "test-preload-963013", mac: "52:54:00:bc:ad:83", ip: "192.168.39.204"}
	I1024 19:51:57.547257   38158 main.go:141] libmachine: (test-preload-963013) Waiting for SSH to be available...
	I1024 19:51:57.547265   38158 main.go:141] libmachine: (test-preload-963013) DBG | Getting to WaitForSSH function...
	I1024 19:51:57.549371   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.549684   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:57.549718   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.549825   38158 main.go:141] libmachine: (test-preload-963013) DBG | Using SSH client type: external
	I1024 19:51:57.549855   38158 main.go:141] libmachine: (test-preload-963013) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/test-preload-963013/id_rsa (-rw-------)
	I1024 19:51:57.549899   38158 main.go:141] libmachine: (test-preload-963013) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.204 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/test-preload-963013/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 19:51:57.549963   38158 main.go:141] libmachine: (test-preload-963013) DBG | About to run SSH command:
	I1024 19:51:57.549991   38158 main.go:141] libmachine: (test-preload-963013) DBG | exit 0
	I1024 19:51:57.640696   38158 main.go:141] libmachine: (test-preload-963013) DBG | SSH cmd err, output: <nil>: 
	I1024 19:51:57.641135   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetConfigRaw
	I1024 19:51:57.641773   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetIP
	I1024 19:51:57.644182   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.644480   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:57.644515   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.644768   38158 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/config.json ...
	I1024 19:51:57.644973   38158 machine.go:88] provisioning docker machine ...
	I1024 19:51:57.644997   38158 main.go:141] libmachine: (test-preload-963013) Calling .DriverName
	I1024 19:51:57.645191   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetMachineName
	I1024 19:51:57.645363   38158 buildroot.go:166] provisioning hostname "test-preload-963013"
	I1024 19:51:57.645384   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetMachineName
	I1024 19:51:57.645529   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHHostname
	I1024 19:51:57.647626   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.647932   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:57.647966   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.648036   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHPort
	I1024 19:51:57.648220   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:51:57.648469   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:51:57.648612   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHUsername
	I1024 19:51:57.648761   38158 main.go:141] libmachine: Using SSH client type: native
	I1024 19:51:57.649081   38158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1024 19:51:57.649094   38158 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-963013 && echo "test-preload-963013" | sudo tee /etc/hostname
	I1024 19:51:57.773210   38158 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-963013
	
	I1024 19:51:57.773231   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHHostname
	I1024 19:51:57.775649   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.775945   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:57.775989   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.776141   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHPort
	I1024 19:51:57.776337   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:51:57.776518   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:51:57.776658   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHUsername
	I1024 19:51:57.776831   38158 main.go:141] libmachine: Using SSH client type: native
	I1024 19:51:57.777134   38158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1024 19:51:57.777151   38158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-963013' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-963013/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-963013' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:51:57.897492   38158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:51:57.897521   38158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 19:51:57.897546   38158 buildroot.go:174] setting up certificates
	I1024 19:51:57.897557   38158 provision.go:83] configureAuth start
	I1024 19:51:57.897572   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetMachineName
	I1024 19:51:57.897782   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetIP
	I1024 19:51:57.900328   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.900783   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:57.900805   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.900966   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHHostname
	I1024 19:51:57.903092   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.903429   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:57.903480   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:57.903556   38158 provision.go:138] copyHostCerts
	I1024 19:51:57.903619   38158 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 19:51:57.903648   38158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:51:57.903715   38158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 19:51:57.903796   38158 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 19:51:57.903804   38158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:51:57.903826   38158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 19:51:57.903883   38158 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 19:51:57.903890   38158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:51:57.903912   38158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 19:51:57.903954   38158 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.test-preload-963013 san=[192.168.39.204 192.168.39.204 localhost 127.0.0.1 minikube test-preload-963013]
	I1024 19:51:58.012374   38158 provision.go:172] copyRemoteCerts
	I1024 19:51:58.012441   38158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:51:58.012462   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHHostname
	I1024 19:51:58.015285   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.015551   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:58.015576   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.015755   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHPort
	I1024 19:51:58.015912   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:51:58.016061   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHUsername
	I1024 19:51:58.016143   38158 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/test-preload-963013/id_rsa Username:docker}
	I1024 19:51:58.102538   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 19:51:58.127502   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1024 19:51:58.151191   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 19:51:58.175218   38158 provision.go:86] duration metric: configureAuth took 277.645604ms
	I1024 19:51:58.175254   38158 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:51:58.175446   38158 config.go:182] Loaded profile config "test-preload-963013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1024 19:51:58.175561   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHHostname
	I1024 19:51:58.177978   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.178379   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:58.178414   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.178572   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHPort
	I1024 19:51:58.178785   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:51:58.178959   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:51:58.179102   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHUsername
	I1024 19:51:58.179226   38158 main.go:141] libmachine: Using SSH client type: native
	I1024 19:51:58.179558   38158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1024 19:51:58.179581   38158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:51:58.471157   38158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:51:58.471180   38158 machine.go:91] provisioned docker machine in 826.189945ms
	I1024 19:51:58.471191   38158 start.go:300] post-start starting for "test-preload-963013" (driver="kvm2")
	I1024 19:51:58.471205   38158 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:51:58.471225   38158 main.go:141] libmachine: (test-preload-963013) Calling .DriverName
	I1024 19:51:58.471538   38158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:51:58.471575   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHHostname
	I1024 19:51:58.474093   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.474502   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:58.474550   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.474685   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHPort
	I1024 19:51:58.474884   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:51:58.475032   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHUsername
	I1024 19:51:58.475174   38158 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/test-preload-963013/id_rsa Username:docker}
	I1024 19:51:58.563692   38158 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:51:58.567793   38158 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 19:51:58.567815   38158 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 19:51:58.567890   38158 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 19:51:58.567998   38158 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 19:51:58.568112   38158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:51:58.576926   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:51:58.599185   38158 start.go:303] post-start completed in 127.977509ms
	I1024 19:51:58.599217   38158 fix.go:56] fixHost completed within 20.748846103s
	I1024 19:51:58.599248   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHHostname
	I1024 19:51:58.601953   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.602258   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:58.602287   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.602465   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHPort
	I1024 19:51:58.602689   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:51:58.602862   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:51:58.602980   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHUsername
	I1024 19:51:58.603152   38158 main.go:141] libmachine: Using SSH client type: native
	I1024 19:51:58.603617   38158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1024 19:51:58.603646   38158 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 19:51:58.718502   38158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698177118.668271474
	
	I1024 19:51:58.718526   38158 fix.go:206] guest clock: 1698177118.668271474
	I1024 19:51:58.718536   38158 fix.go:219] Guest: 2023-10-24 19:51:58.668271474 +0000 UTC Remote: 2023-10-24 19:51:58.599222284 +0000 UTC m=+25.992204986 (delta=69.04919ms)
	I1024 19:51:58.718560   38158 fix.go:190] guest clock delta is within tolerance: 69.04919ms
	I1024 19:51:58.718579   38158 start.go:83] releasing machines lock for "test-preload-963013", held for 20.868223377s
	I1024 19:51:58.718607   38158 main.go:141] libmachine: (test-preload-963013) Calling .DriverName
	I1024 19:51:58.718894   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetIP
	I1024 19:51:58.721500   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.721783   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:58.721819   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.721909   38158 main.go:141] libmachine: (test-preload-963013) Calling .DriverName
	I1024 19:51:58.722392   38158 main.go:141] libmachine: (test-preload-963013) Calling .DriverName
	I1024 19:51:58.722542   38158 main.go:141] libmachine: (test-preload-963013) Calling .DriverName
	I1024 19:51:58.722616   38158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:51:58.722660   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHHostname
	I1024 19:51:58.722706   38158 ssh_runner.go:195] Run: cat /version.json
	I1024 19:51:58.722729   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHHostname
	I1024 19:51:58.725199   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.725575   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.725669   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:58.725691   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.725872   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHPort
	I1024 19:51:58.726045   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:51:58.726085   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:51:58.726116   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:51:58.726206   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHUsername
	I1024 19:51:58.726277   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHPort
	I1024 19:51:58.726351   38158 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/test-preload-963013/id_rsa Username:docker}
	I1024 19:51:58.726452   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:51:58.726588   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHUsername
	I1024 19:51:58.726751   38158 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/test-preload-963013/id_rsa Username:docker}
	I1024 19:51:58.810460   38158 ssh_runner.go:195] Run: systemctl --version
	I1024 19:51:58.832725   38158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:51:58.975244   38158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 19:51:58.981064   38158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:51:58.981124   38158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:51:58.995313   38158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 19:51:58.995334   38158 start.go:472] detecting cgroup driver to use...
	I1024 19:51:58.995399   38158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:51:59.008716   38158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:51:59.020820   38158 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:51:59.020859   38158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:51:59.033423   38158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:51:59.046364   38158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:51:59.161196   38158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:51:59.269027   38158 docker.go:214] disabling docker service ...
	I1024 19:51:59.269092   38158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:51:59.281410   38158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:51:59.292722   38158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:51:59.398071   38158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:51:59.502470   38158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:51:59.513909   38158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:51:59.529288   38158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1024 19:51:59.529354   38158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:51:59.537814   38158 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:51:59.537870   38158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:51:59.546563   38158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:51:59.556074   38158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:51:59.565777   38158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:51:59.575568   38158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:51:59.583916   38158 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 19:51:59.583963   38158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 19:51:59.597175   38158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:51:59.605181   38158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:51:59.706789   38158 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:51:59.868803   38158 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:51:59.868871   38158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:51:59.874048   38158 start.go:540] Will wait 60s for crictl version
	I1024 19:51:59.874097   38158 ssh_runner.go:195] Run: which crictl
	I1024 19:51:59.877667   38158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:51:59.924740   38158 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 19:51:59.924815   38158 ssh_runner.go:195] Run: crio --version
	I1024 19:51:59.971268   38158 ssh_runner.go:195] Run: crio --version
	I1024 19:52:00.027538   38158 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I1024 19:52:00.029196   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetIP
	I1024 19:52:00.031870   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:52:00.032163   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:52:00.032197   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:52:00.032341   38158 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 19:52:00.036080   38158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:52:00.048201   38158 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1024 19:52:00.048248   38158 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:52:00.086395   38158 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1024 19:52:00.086466   38158 ssh_runner.go:195] Run: which lz4
	I1024 19:52:00.090118   38158 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 19:52:00.094095   38158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:52:00.094119   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1024 19:52:02.047227   38158 crio.go:444] Took 1.957128 seconds to copy over tarball
	I1024 19:52:02.047294   38158 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 19:52:05.092066   38158 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.044729177s)
	I1024 19:52:05.092096   38158 crio.go:451] Took 3.044848 seconds to extract the tarball
	I1024 19:52:05.092107   38158 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 19:52:05.132550   38158 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:52:05.177525   38158 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1024 19:52:05.177546   38158 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 19:52:05.177619   38158 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:52:05.177653   38158 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1024 19:52:05.177668   38158 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1024 19:52:05.177685   38158 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1024 19:52:05.177693   38158 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1024 19:52:05.177619   38158 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1024 19:52:05.177661   38158 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1024 19:52:05.177828   38158 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1024 19:52:05.178943   38158 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1024 19:52:05.178947   38158 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1024 19:52:05.178957   38158 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1024 19:52:05.178967   38158 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1024 19:52:05.178971   38158 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1024 19:52:05.178973   38158 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1024 19:52:05.178947   38158 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:52:05.179007   38158 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1024 19:52:05.338299   38158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1024 19:52:05.342591   38158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1024 19:52:05.345878   38158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1024 19:52:05.348789   38158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1024 19:52:05.372024   38158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1024 19:52:05.384854   38158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1024 19:52:05.406335   38158 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1024 19:52:05.406379   38158 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1024 19:52:05.406425   38158 ssh_runner.go:195] Run: which crictl
	I1024 19:52:05.436449   38158 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1024 19:52:05.436487   38158 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1024 19:52:05.436542   38158 ssh_runner.go:195] Run: which crictl
	I1024 19:52:05.440357   38158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1024 19:52:05.471236   38158 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1024 19:52:05.471281   38158 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1024 19:52:05.471316   38158 ssh_runner.go:195] Run: which crictl
	I1024 19:52:05.480889   38158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:52:05.485796   38158 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1024 19:52:05.485836   38158 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1024 19:52:05.485891   38158 ssh_runner.go:195] Run: which crictl
	I1024 19:52:05.511424   38158 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1024 19:52:05.511457   38158 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1024 19:52:05.511514   38158 ssh_runner.go:195] Run: which crictl
	I1024 19:52:05.534349   38158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1024 19:52:05.534433   38158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1024 19:52:05.534566   38158 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1024 19:52:05.534602   38158 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1024 19:52:05.534650   38158 ssh_runner.go:195] Run: which crictl
	I1024 19:52:05.571403   38158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1024 19:52:05.571503   38158 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1024 19:52:05.571538   38158 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1024 19:52:05.571572   38158 ssh_runner.go:195] Run: which crictl
	I1024 19:52:05.684894   38158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1024 19:52:05.684929   38158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1024 19:52:05.684996   38158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1024 19:52:05.685058   38158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1024 19:52:05.685086   38158 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I1024 19:52:05.685116   38158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1024 19:52:05.685134   38158 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1024 19:52:05.685181   38158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1024 19:52:05.685220   38158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1024 19:52:05.685263   38158 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1024 19:52:05.779198   38158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1024 19:52:05.779251   38158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1024 19:52:05.779294   38158 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1024 19:52:05.779344   38158 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1024 19:52:05.779357   38158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1024 19:52:05.779370   38158 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I1024 19:52:05.779410   38158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1024 19:52:05.779440   38158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1024 19:52:05.779480   38158 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I1024 19:52:05.779499   38158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1024 19:52:05.791765   38158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1024 19:52:05.791803   38158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1024 19:52:05.791865   38158 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I1024 19:52:07.749998   38158 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: (1.970499377s)
	I1024 19:52:07.750030   38158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1024 19:52:07.750069   38158 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (1.970607505s)
	I1024 19:52:07.750088   38158 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1024 19:52:07.750111   38158 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1024 19:52:07.750123   38158 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4: (1.970764381s)
	I1024 19:52:07.750133   38158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1024 19:52:07.750090   38158 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4: (1.970787063s)
	I1024 19:52:07.750144   38158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1024 19:52:07.750159   38158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1024 19:52:07.750174   38158 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: (1.958290164s)
	I1024 19:52:07.750195   38158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1024 19:52:08.502844   38158 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1024 19:52:08.502888   38158 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1024 19:52:08.502967   38158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1024 19:52:09.249346   38158 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1024 19:52:09.249384   38158 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1024 19:52:09.249446   38158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1024 19:52:09.691019   38158 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1024 19:52:09.691065   38158 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1024 19:52:09.691119   38158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1024 19:52:10.634303   38158 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1024 19:52:10.634364   38158 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1024 19:52:10.634417   38158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1024 19:52:11.081796   38158 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1024 19:52:11.081836   38158 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1024 19:52:11.081899   38158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1024 19:52:13.334113   38158 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.25218134s)
	I1024 19:52:13.334166   38158 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1024 19:52:13.334200   38158 cache_images.go:123] Successfully loaded all cached images
	I1024 19:52:13.334207   38158 cache_images.go:92] LoadImages completed in 8.156648586s
	I1024 19:52:13.334297   38158 ssh_runner.go:195] Run: crio config
	I1024 19:52:13.388622   38158 cni.go:84] Creating CNI manager for ""
	I1024 19:52:13.388640   38158 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:52:13.388655   38158 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:52:13.388673   38158 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.204 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-963013 NodeName:test-preload-963013 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:52:13.388805   38158 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-963013"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 19:52:13.388867   38158 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-963013 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-963013 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:52:13.388914   38158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1024 19:52:13.397828   38158 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:52:13.397905   38158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:52:13.406232   38158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1024 19:52:13.422460   38158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:52:13.437602   38158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1024 19:52:13.454155   38158 ssh_runner.go:195] Run: grep 192.168.39.204	control-plane.minikube.internal$ /etc/hosts
	I1024 19:52:13.457863   38158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:52:13.470407   38158 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013 for IP: 192.168.39.204
	I1024 19:52:13.470440   38158 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:52:13.470583   38158 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 19:52:13.470617   38158 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 19:52:13.470688   38158 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/client.key
	I1024 19:52:13.470748   38158 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/apiserver.key.a58e8f41
	I1024 19:52:13.470782   38158 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/proxy-client.key
	I1024 19:52:13.470886   38158 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 19:52:13.470914   38158 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 19:52:13.470921   38158 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 19:52:13.470942   38158 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 19:52:13.470963   38158 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:52:13.470996   38158 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 19:52:13.471056   38158 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:52:13.471725   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:52:13.496651   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:52:13.520399   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:52:13.542873   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 19:52:13.566774   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:52:13.590501   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:52:13.613547   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:52:13.636054   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 19:52:13.658497   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 19:52:13.680530   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:52:13.703357   38158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 19:52:13.725048   38158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:52:13.741269   38158 ssh_runner.go:195] Run: openssl version
	I1024 19:52:13.746662   38158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:52:13.756432   38158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:52:13.760904   38158 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:52:13.760957   38158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:52:13.766317   38158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:52:13.776173   38158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 19:52:13.786062   38158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 19:52:13.790680   38158 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 19:52:13.790729   38158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 19:52:13.796118   38158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 19:52:13.805602   38158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 19:52:13.815871   38158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 19:52:13.820482   38158 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 19:52:13.820542   38158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 19:52:13.826031   38158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 19:52:13.835792   38158 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:52:13.840456   38158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 19:52:13.846590   38158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 19:52:13.852558   38158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 19:52:13.858411   38158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 19:52:13.864479   38158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 19:52:13.870336   38158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
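The six `openssl x509 ... -checkend 86400` runs above are minikube verifying that each control-plane certificate on the guest is still valid for at least another 24 hours before reusing it. A minimal Go sketch of the same check, assuming a local PEM file (minikube itself shells out to the openssl binary over SSH instead; the path below is just the one seen in the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresSoon reports whether the certificate at path expires within the
    // given window, mirroring `openssl x509 -checkend 86400`.
    func expiresSoon(path string, within time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(within).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }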
	I1024 19:52:13.876419   38158 kubeadm.go:404] StartCluster: {Name:test-preload-963013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.24.4 ClusterName:test-preload-963013 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:52:13.876527   38158 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:52:13.876583   38158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:52:13.917808   38158 cri.go:89] found id: ""
	I1024 19:52:13.917894   38158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:52:13.927725   38158 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 19:52:13.927748   38158 kubeadm.go:636] restartCluster start
	I1024 19:52:13.927803   38158 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 19:52:13.936775   38158 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:13.937271   38158 kubeconfig.go:135] verify returned: extract IP: "test-preload-963013" does not appear in /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:52:13.937428   38158 kubeconfig.go:146] "test-preload-963013" context is missing from /home/jenkins/minikube-integration/17485-9023/kubeconfig - will repair!
	I1024 19:52:13.937685   38158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:52:13.938299   38158 kapi.go:59] client config for test-preload-963013: &rest.Config{Host:"https://192.168.39.204:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:52:13.938999   38158 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 19:52:13.947645   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:13.947705   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:13.958933   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:13.958950   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:13.958986   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:13.969683   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:14.470442   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:14.470558   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:14.481825   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:14.970501   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:14.970593   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:14.982954   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:15.470023   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:15.470114   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:15.481263   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:15.970436   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:15.970505   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:15.981912   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:16.470568   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:16.470631   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:16.481967   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:16.970569   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:16.970647   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:16.982540   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:17.470082   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:17.470146   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:17.481159   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:17.970318   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:17.970384   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:17.982200   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:18.469792   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:18.469867   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:18.481272   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:18.969825   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:18.969943   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:18.981387   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:19.469891   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:19.469970   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:19.481107   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:19.970784   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:19.970892   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:19.982237   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:20.469849   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:20.469925   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:20.481031   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:20.970423   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:20.970484   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:20.982227   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:21.470453   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:21.470534   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:21.487325   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:21.969820   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:21.969919   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:21.982742   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:22.470281   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:22.470397   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:22.483455   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:22.970720   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:22.970816   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:22.982323   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:23.469920   38158 api_server.go:166] Checking apiserver status ...
	I1024 19:52:23.469996   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:52:23.482357   38158 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:52:23.948044   38158 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 19:52:23.948076   38158 kubeadm.go:1128] stopping kube-system containers ...
	I1024 19:52:23.948089   38158 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 19:52:23.948166   38158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:52:23.994119   38158 cri.go:89] found id: ""
	I1024 19:52:23.994179   38158 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 19:52:24.011146   38158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:52:24.021067   38158 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:52:24.021125   38158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:52:24.031299   38158 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 19:52:24.031321   38158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:52:24.124415   38158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:52:25.228287   38158 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.10383382s)
	I1024 19:52:25.228318   38158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:52:25.573606   38158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:52:25.667202   38158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
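Because the config check above found no existing kubeconfig files on the node, the restart path re-runs only the individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml rather than a full `kubeadm init`. A rough Go sketch of that sequence, assuming it runs locally on the guest (minikube actually drives these commands over SSH via ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Phase order as it appears in the log; paths and PATH prefix copied
    	// from the log lines above.
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    			phase,
    		)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
    			return
    		}
    	}
    }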
	I1024 19:52:25.796935   38158 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:52:25.797053   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:52:25.813818   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:52:26.328473   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:52:26.828820   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:52:27.328866   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:52:27.828445   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:52:27.866817   38158 api_server.go:72] duration metric: took 2.069884439s to wait for apiserver process to appear ...
	I1024 19:52:27.866837   38158 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:52:27.866854   38158 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1024 19:52:32.867710   38158 api_server.go:269] stopped: https://192.168.39.204:8443/healthz: Get "https://192.168.39.204:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1024 19:52:32.867749   38158 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1024 19:52:32.896155   38158 api_server.go:279] https://192.168.39.204:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 19:52:32.896180   38158 api_server.go:103] status: https://192.168.39.204:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 19:52:33.396942   38158 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1024 19:52:33.402972   38158 api_server.go:279] https://192.168.39.204:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1024 19:52:33.402997   38158 api_server.go:103] status: https://192.168.39.204:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1024 19:52:33.896507   38158 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1024 19:52:33.903660   38158 api_server.go:279] https://192.168.39.204:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1024 19:52:33.903696   38158 api_server.go:103] status: https://192.168.39.204:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1024 19:52:34.396565   38158 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1024 19:52:34.403304   38158 api_server.go:279] https://192.168.39.204:8443/healthz returned 200:
	ok
	I1024 19:52:34.410860   38158 api_server.go:141] control plane version: v1.24.4
	I1024 19:52:34.410883   38158 api_server.go:131] duration metric: took 6.544040066s to wait for apiserver health ...
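The healthz wait above tolerates intermediate failures: a 403 while the apiserver still rejects the anonymous user, then 500s while post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) are still failing, until the endpoint finally returns 200 "ok". A minimal sketch of that polling pattern, not minikube's actual api_server.go code; TLS verification is skipped only to keep the example self-contained:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz retries /healthz until it returns 200 or the deadline
    // passes, treating 403/500 responses as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.204:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }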
	I1024 19:52:34.410891   38158 cni.go:84] Creating CNI manager for ""
	I1024 19:52:34.410897   38158 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:52:34.412636   38158 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 19:52:34.414005   38158 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 19:52:34.425280   38158 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 19:52:34.444602   38158 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:52:34.454235   38158 system_pods.go:59] 8 kube-system pods found
	I1024 19:52:34.454261   38158 system_pods.go:61] "coredns-6d4b75cb6d-nlbsm" [4e4411e1-4dc8-4424-abcd-567f211631dd] Running
	I1024 19:52:34.454266   38158 system_pods.go:61] "coredns-6d4b75cb6d-vrmmb" [a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1] Running
	I1024 19:52:34.454272   38158 system_pods.go:61] "etcd-test-preload-963013" [a94abc9c-a121-4ede-8b07-56296a4ea8c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 19:52:34.454279   38158 system_pods.go:61] "kube-apiserver-test-preload-963013" [62a70e30-81eb-44e7-b8a5-c0e8c6f420a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 19:52:34.454285   38158 system_pods.go:61] "kube-controller-manager-test-preload-963013" [02bd0782-eb73-481d-b773-5223fc7f8b7c] Running
	I1024 19:52:34.454290   38158 system_pods.go:61] "kube-proxy-hg9gw" [709820ae-b9e4-4c6d-b7bc-88f108fa986b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 19:52:34.454294   38158 system_pods.go:61] "kube-scheduler-test-preload-963013" [0fe3b5f8-0017-4711-ac1a-0304266ded87] Running
	I1024 19:52:34.454299   38158 system_pods.go:61] "storage-provisioner" [a8ecb3ab-719c-4623-8af1-422fb0a84baf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 19:52:34.454306   38158 system_pods.go:74] duration metric: took 9.685659ms to wait for pod list to return data ...
	I1024 19:52:34.454328   38158 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:52:34.457833   38158 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:52:34.457863   38158 node_conditions.go:123] node cpu capacity is 2
	I1024 19:52:34.457873   38158 node_conditions.go:105] duration metric: took 3.54088ms to run NodePressure ...
	I1024 19:52:34.457915   38158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:52:34.668169   38158 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 19:52:34.680722   38158 kubeadm.go:787] kubelet initialised
	I1024 19:52:34.680741   38158 kubeadm.go:788] duration metric: took 12.551766ms waiting for restarted kubelet to initialise ...
	I1024 19:52:34.680748   38158 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:52:34.689994   38158 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-nlbsm" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:34.695041   38158 pod_ready.go:97] node "test-preload-963013" hosting pod "coredns-6d4b75cb6d-nlbsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:34.695108   38158 pod_ready.go:81] duration metric: took 5.09571ms waiting for pod "coredns-6d4b75cb6d-nlbsm" in "kube-system" namespace to be "Ready" ...
	E1024 19:52:34.695121   38158 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-963013" hosting pod "coredns-6d4b75cb6d-nlbsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:34.695142   38158 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-vrmmb" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:34.713599   38158 pod_ready.go:97] node "test-preload-963013" hosting pod "coredns-6d4b75cb6d-vrmmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:34.713633   38158 pod_ready.go:81] duration metric: took 18.476423ms waiting for pod "coredns-6d4b75cb6d-vrmmb" in "kube-system" namespace to be "Ready" ...
	E1024 19:52:34.713645   38158 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-963013" hosting pod "coredns-6d4b75cb6d-vrmmb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:34.713657   38158 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:34.722579   38158 pod_ready.go:97] node "test-preload-963013" hosting pod "etcd-test-preload-963013" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:34.722606   38158 pod_ready.go:81] duration metric: took 8.935086ms waiting for pod "etcd-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	E1024 19:52:34.722618   38158 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-963013" hosting pod "etcd-test-preload-963013" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:34.722629   38158 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:34.848015   38158 pod_ready.go:97] node "test-preload-963013" hosting pod "kube-apiserver-test-preload-963013" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:34.848044   38158 pod_ready.go:81] duration metric: took 125.404892ms waiting for pod "kube-apiserver-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	E1024 19:52:34.848055   38158 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-963013" hosting pod "kube-apiserver-test-preload-963013" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:34.848065   38158 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:35.247937   38158 pod_ready.go:97] node "test-preload-963013" hosting pod "kube-controller-manager-test-preload-963013" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:35.247960   38158 pod_ready.go:81] duration metric: took 399.88699ms waiting for pod "kube-controller-manager-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	E1024 19:52:35.247969   38158 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-963013" hosting pod "kube-controller-manager-test-preload-963013" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:35.247976   38158 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hg9gw" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:35.650796   38158 pod_ready.go:97] node "test-preload-963013" hosting pod "kube-proxy-hg9gw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:35.650824   38158 pod_ready.go:81] duration metric: took 402.838339ms waiting for pod "kube-proxy-hg9gw" in "kube-system" namespace to be "Ready" ...
	E1024 19:52:35.650833   38158 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-963013" hosting pod "kube-proxy-hg9gw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:35.650838   38158 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:36.049650   38158 pod_ready.go:97] node "test-preload-963013" hosting pod "kube-scheduler-test-preload-963013" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:36.049682   38158 pod_ready.go:81] duration metric: took 398.836373ms waiting for pod "kube-scheduler-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	E1024 19:52:36.049694   38158 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-963013" hosting pod "kube-scheduler-test-preload-963013" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:36.049705   38158 pod_ready.go:38] duration metric: took 1.368947662s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
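Each "waiting up to 4m0s for pod ... to be Ready" line above corresponds to a per-pod readiness check that is skipped while the node itself is not Ready. A minimal client-go sketch of the underlying check, assuming the kubeconfig path and pod name from this log purely as placeholders; pod_ready.go's real logic layers retries and node-status short-circuiting on top of this:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17485-9023/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		ready, err := podReady(cs, "kube-system", "etcd-test-preload-963013")
    		if err == nil && ready {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }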
	I1024 19:52:36.049724   38158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:52:36.060838   38158 ops.go:34] apiserver oom_adj: -16
	I1024 19:52:36.060858   38158 kubeadm.go:640] restartCluster took 22.133102688s
	I1024 19:52:36.060867   38158 kubeadm.go:406] StartCluster complete in 22.184454898s
	I1024 19:52:36.060886   38158 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:52:36.060955   38158 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:52:36.062037   38158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:52:36.062276   38158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:52:36.062425   38158 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:52:36.062522   38158 config.go:182] Loaded profile config "test-preload-963013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1024 19:52:36.062527   38158 addons.go:69] Setting storage-provisioner=true in profile "test-preload-963013"
	I1024 19:52:36.062586   38158 addons.go:231] Setting addon storage-provisioner=true in "test-preload-963013"
	W1024 19:52:36.062602   38158 addons.go:240] addon storage-provisioner should already be in state true
	I1024 19:52:36.062654   38158 host.go:66] Checking if "test-preload-963013" exists ...
	I1024 19:52:36.062529   38158 addons.go:69] Setting default-storageclass=true in profile "test-preload-963013"
	I1024 19:52:36.062704   38158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-963013"
	I1024 19:52:36.063033   38158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:52:36.063074   38158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:52:36.063012   38158 kapi.go:59] client config for test-preload-963013: &rest.Config{Host:"https://192.168.39.204:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:52:36.063135   38158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:52:36.063181   38158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:52:36.066778   38158 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-963013" context rescaled to 1 replicas
	I1024 19:52:36.066810   38158 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:52:36.068988   38158 out.go:177] * Verifying Kubernetes components...
	I1024 19:52:36.070624   38158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:52:36.078592   38158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38833
	I1024 19:52:36.079005   38158 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:52:36.079500   38158 main.go:141] libmachine: Using API Version  1
	I1024 19:52:36.079528   38158 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:52:36.079857   38158 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:52:36.080056   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetState
	I1024 19:52:36.081874   38158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44471
	I1024 19:52:36.082256   38158 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:52:36.082590   38158 kapi.go:59] client config for test-preload-963013: &rest.Config{Host:"https://192.168.39.204:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/profiles/test-preload-963013/client.key", CAFile:"/home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:52:36.082698   38158 main.go:141] libmachine: Using API Version  1
	I1024 19:52:36.082723   38158 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:52:36.082889   38158 addons.go:231] Setting addon default-storageclass=true in "test-preload-963013"
	W1024 19:52:36.082906   38158 addons.go:240] addon default-storageclass should already be in state true
	I1024 19:52:36.082933   38158 host.go:66] Checking if "test-preload-963013" exists ...
	I1024 19:52:36.083073   38158 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:52:36.083354   38158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:52:36.083401   38158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:52:36.083632   38158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:52:36.083695   38158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:52:36.097849   38158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I1024 19:52:36.097959   38158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I1024 19:52:36.098279   38158 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:52:36.098391   38158 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:52:36.098720   38158 main.go:141] libmachine: Using API Version  1
	I1024 19:52:36.098736   38158 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:52:36.098815   38158 main.go:141] libmachine: Using API Version  1
	I1024 19:52:36.098824   38158 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:52:36.099010   38158 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:52:36.099253   38158 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:52:36.099439   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetState
	I1024 19:52:36.099584   38158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:52:36.099628   38158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:52:36.100964   38158 main.go:141] libmachine: (test-preload-963013) Calling .DriverName
	I1024 19:52:36.103040   38158 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:52:36.104432   38158 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:52:36.104448   38158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:52:36.104471   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHHostname
	I1024 19:52:36.107452   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:52:36.107957   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:52:36.107999   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:52:36.108265   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHPort
	I1024 19:52:36.108425   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:52:36.108567   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHUsername
	I1024 19:52:36.108699   38158 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/test-preload-963013/id_rsa Username:docker}
	I1024 19:52:36.116187   38158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33095
	I1024 19:52:36.116556   38158 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:52:36.117014   38158 main.go:141] libmachine: Using API Version  1
	I1024 19:52:36.117040   38158 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:52:36.117391   38158 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:52:36.117557   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetState
	I1024 19:52:36.118858   38158 main.go:141] libmachine: (test-preload-963013) Calling .DriverName
	I1024 19:52:36.119085   38158 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:52:36.119099   38158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:52:36.119111   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHHostname
	I1024 19:52:36.121644   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:52:36.121945   38158 main.go:141] libmachine: (test-preload-963013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:ad:83", ip: ""} in network mk-test-preload-963013: {Iface:virbr1 ExpiryTime:2023-10-24 20:51:50 +0000 UTC Type:0 Mac:52:54:00:bc:ad:83 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-963013 Clientid:01:52:54:00:bc:ad:83}
	I1024 19:52:36.121974   38158 main.go:141] libmachine: (test-preload-963013) DBG | domain test-preload-963013 has defined IP address 192.168.39.204 and MAC address 52:54:00:bc:ad:83 in network mk-test-preload-963013
	I1024 19:52:36.122170   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHPort
	I1024 19:52:36.122309   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHKeyPath
	I1024 19:52:36.122422   38158 main.go:141] libmachine: (test-preload-963013) Calling .GetSSHUsername
	I1024 19:52:36.122536   38158 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/test-preload-963013/id_rsa Username:docker}
	I1024 19:52:36.247288   38158 node_ready.go:35] waiting up to 6m0s for node "test-preload-963013" to be "Ready" ...
	I1024 19:52:36.247699   38158 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 19:52:36.259787   38158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:52:36.266943   38158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:52:37.152721   38158 main.go:141] libmachine: Making call to close driver server
	I1024 19:52:37.152747   38158 main.go:141] libmachine: (test-preload-963013) Calling .Close
	I1024 19:52:37.153131   38158 main.go:141] libmachine: (test-preload-963013) DBG | Closing plugin on server side
	I1024 19:52:37.153159   38158 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:52:37.153177   38158 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:52:37.153195   38158 main.go:141] libmachine: Making call to close driver server
	I1024 19:52:37.153223   38158 main.go:141] libmachine: (test-preload-963013) Calling .Close
	I1024 19:52:37.153440   38158 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:52:37.153470   38158 main.go:141] libmachine: (test-preload-963013) DBG | Closing plugin on server side
	I1024 19:52:37.153505   38158 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:52:37.164556   38158 main.go:141] libmachine: Making call to close driver server
	I1024 19:52:37.164572   38158 main.go:141] libmachine: (test-preload-963013) Calling .Close
	I1024 19:52:37.164821   38158 main.go:141] libmachine: (test-preload-963013) DBG | Closing plugin on server side
	I1024 19:52:37.164865   38158 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:52:37.164881   38158 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:52:37.164896   38158 main.go:141] libmachine: Making call to close driver server
	I1024 19:52:37.164909   38158 main.go:141] libmachine: (test-preload-963013) Calling .Close
	I1024 19:52:37.165132   38158 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:52:37.165148   38158 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:52:37.171474   38158 main.go:141] libmachine: Making call to close driver server
	I1024 19:52:37.171488   38158 main.go:141] libmachine: (test-preload-963013) Calling .Close
	I1024 19:52:37.171722   38158 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:52:37.171738   38158 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:52:37.174241   38158 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1024 19:52:37.176085   38158 addons.go:502] enable addons completed in 1.113668784s: enabled=[storage-provisioner default-storageclass]
	I1024 19:52:38.456494   38158 node_ready.go:58] node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:40.955472   38158 node_ready.go:58] node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:42.956024   38158 node_ready.go:58] node "test-preload-963013" has status "Ready":"False"
	I1024 19:52:43.456083   38158 node_ready.go:49] node "test-preload-963013" has status "Ready":"True"
	I1024 19:52:43.456106   38158 node_ready.go:38] duration metric: took 7.208791203s waiting for node "test-preload-963013" to be "Ready" ...
	I1024 19:52:43.456114   38158 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:52:43.462670   38158 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vrmmb" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:43.468530   38158 pod_ready.go:92] pod "coredns-6d4b75cb6d-vrmmb" in "kube-system" namespace has status "Ready":"True"
	I1024 19:52:43.468547   38158 pod_ready.go:81] duration metric: took 5.857709ms waiting for pod "coredns-6d4b75cb6d-vrmmb" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:43.468554   38158 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:45.486236   38158 pod_ready.go:102] pod "etcd-test-preload-963013" in "kube-system" namespace has status "Ready":"False"
	I1024 19:52:46.487276   38158 pod_ready.go:92] pod "etcd-test-preload-963013" in "kube-system" namespace has status "Ready":"True"
	I1024 19:52:46.487296   38158 pod_ready.go:81] duration metric: took 3.01873561s waiting for pod "etcd-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:46.487305   38158 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:46.492431   38158 pod_ready.go:92] pod "kube-apiserver-test-preload-963013" in "kube-system" namespace has status "Ready":"True"
	I1024 19:52:46.492447   38158 pod_ready.go:81] duration metric: took 5.135882ms waiting for pod "kube-apiserver-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:46.492460   38158 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:46.498128   38158 pod_ready.go:92] pod "kube-controller-manager-test-preload-963013" in "kube-system" namespace has status "Ready":"True"
	I1024 19:52:46.498142   38158 pod_ready.go:81] duration metric: took 5.677162ms waiting for pod "kube-controller-manager-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:46.498150   38158 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hg9gw" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:46.656889   38158 pod_ready.go:92] pod "kube-proxy-hg9gw" in "kube-system" namespace has status "Ready":"True"
	I1024 19:52:46.656913   38158 pod_ready.go:81] duration metric: took 158.757468ms waiting for pod "kube-proxy-hg9gw" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:46.656923   38158 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:47.056339   38158 pod_ready.go:92] pod "kube-scheduler-test-preload-963013" in "kube-system" namespace has status "Ready":"True"
	I1024 19:52:47.056364   38158 pod_ready.go:81] duration metric: took 399.432683ms waiting for pod "kube-scheduler-test-preload-963013" in "kube-system" namespace to be "Ready" ...
	I1024 19:52:47.056375   38158 pod_ready.go:38] duration metric: took 3.600252217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:52:47.056409   38158 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:52:47.056559   38158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:52:47.069510   38158 api_server.go:72] duration metric: took 11.002645441s to wait for apiserver process to appear ...
	I1024 19:52:47.069531   38158 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:52:47.069549   38158 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1024 19:52:47.075471   38158 api_server.go:279] https://192.168.39.204:8443/healthz returned 200:
	ok
	I1024 19:52:47.076333   38158 api_server.go:141] control plane version: v1.24.4
	I1024 19:52:47.076349   38158 api_server.go:131] duration metric: took 6.812039ms to wait for apiserver health ...
	I1024 19:52:47.076368   38158 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:52:47.258797   38158 system_pods.go:59] 7 kube-system pods found
	I1024 19:52:47.258831   38158 system_pods.go:61] "coredns-6d4b75cb6d-vrmmb" [a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1] Running
	I1024 19:52:47.258838   38158 system_pods.go:61] "etcd-test-preload-963013" [a94abc9c-a121-4ede-8b07-56296a4ea8c7] Running
	I1024 19:52:47.258844   38158 system_pods.go:61] "kube-apiserver-test-preload-963013" [62a70e30-81eb-44e7-b8a5-c0e8c6f420a0] Running
	I1024 19:52:47.258851   38158 system_pods.go:61] "kube-controller-manager-test-preload-963013" [02bd0782-eb73-481d-b773-5223fc7f8b7c] Running
	I1024 19:52:47.258856   38158 system_pods.go:61] "kube-proxy-hg9gw" [709820ae-b9e4-4c6d-b7bc-88f108fa986b] Running
	I1024 19:52:47.258861   38158 system_pods.go:61] "kube-scheduler-test-preload-963013" [0fe3b5f8-0017-4711-ac1a-0304266ded87] Running
	I1024 19:52:47.258866   38158 system_pods.go:61] "storage-provisioner" [a8ecb3ab-719c-4623-8af1-422fb0a84baf] Running
	I1024 19:52:47.258874   38158 system_pods.go:74] duration metric: took 182.498815ms to wait for pod list to return data ...
	I1024 19:52:47.258883   38158 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:52:47.455995   38158 default_sa.go:45] found service account: "default"
	I1024 19:52:47.456024   38158 default_sa.go:55] duration metric: took 197.13356ms for default service account to be created ...
	I1024 19:52:47.456036   38158 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:52:47.658114   38158 system_pods.go:86] 7 kube-system pods found
	I1024 19:52:47.658141   38158 system_pods.go:89] "coredns-6d4b75cb6d-vrmmb" [a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1] Running
	I1024 19:52:47.658152   38158 system_pods.go:89] "etcd-test-preload-963013" [a94abc9c-a121-4ede-8b07-56296a4ea8c7] Running
	I1024 19:52:47.658158   38158 system_pods.go:89] "kube-apiserver-test-preload-963013" [62a70e30-81eb-44e7-b8a5-c0e8c6f420a0] Running
	I1024 19:52:47.658163   38158 system_pods.go:89] "kube-controller-manager-test-preload-963013" [02bd0782-eb73-481d-b773-5223fc7f8b7c] Running
	I1024 19:52:47.658169   38158 system_pods.go:89] "kube-proxy-hg9gw" [709820ae-b9e4-4c6d-b7bc-88f108fa986b] Running
	I1024 19:52:47.658175   38158 system_pods.go:89] "kube-scheduler-test-preload-963013" [0fe3b5f8-0017-4711-ac1a-0304266ded87] Running
	I1024 19:52:47.658181   38158 system_pods.go:89] "storage-provisioner" [a8ecb3ab-719c-4623-8af1-422fb0a84baf] Running
	I1024 19:52:47.658190   38158 system_pods.go:126] duration metric: took 202.147711ms to wait for k8s-apps to be running ...
	I1024 19:52:47.658203   38158 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:52:47.658251   38158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:52:47.672375   38158 system_svc.go:56] duration metric: took 14.166789ms WaitForService to wait for kubelet.
	I1024 19:52:47.672403   38158 kubeadm.go:581] duration metric: took 11.60556767s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:52:47.672425   38158 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:52:47.856145   38158 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:52:47.856175   38158 node_conditions.go:123] node cpu capacity is 2
	I1024 19:52:47.856184   38158 node_conditions.go:105] duration metric: took 183.755296ms to run NodePressure ...
	I1024 19:52:47.856194   38158 start.go:228] waiting for startup goroutines ...
	I1024 19:52:47.856200   38158 start.go:233] waiting for cluster config update ...
	I1024 19:52:47.856208   38158 start.go:242] writing updated cluster config ...
	I1024 19:52:47.856437   38158 ssh_runner.go:195] Run: rm -f paused
	I1024 19:52:47.903812   38158 start.go:600] kubectl: 1.28.3, cluster: 1.24.4 (minor skew: 4)
	I1024 19:52:47.905729   38158 out.go:177] 
	W1024 19:52:47.907371   38158 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.24.4.
	I1024 19:52:47.908913   38158 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1024 19:52:47.910522   38158 out.go:177] * Done! kubectl is now configured to use "test-preload-963013" cluster and "default" namespace by default
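	The start log above waits for the kube-apiserver process (api_server.go) and then polls https://192.168.39.204:8443/healthz until it answers 200 "ok" before declaring the cluster ready. Below is a minimal, self-contained Go sketch of that kind of poll; it is not minikube's implementation, and the hard-coded URL, the 2-minute timeout, and the use of InsecureSkipVerify (instead of loading the cluster CA bundle, as the real test does) are illustrative assumptions only.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it answers
	// HTTP 200 or the overall timeout expires, mirroring the healthz check
	// recorded in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for this sketch only: skip TLS verification rather
				// than loading the cluster CA certificate.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("apiserver healthz not healthy within %s", timeout)
	}

	func main() {
		// Address taken from the log above; only reachable from the test host.
		if err := waitForHealthz("https://192.168.39.204:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}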
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 19:51:49 UTC, ends at Tue 2023-10-24 19:52:48 UTC. --
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.871239699Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=01826084-07e5-4e67-833f-b41f2dd2be48 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.872777037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=15e4610a-7483-4102-a37e-9f1689844cc7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.873289210Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698177168873274330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=15e4610a-7483-4102-a37e-9f1689844cc7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.873803650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9f6acbb1-af0a-4db0-b8fe-6ff58a8a5046 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.873879073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9f6acbb1-af0a-4db0-b8fe-6ff58a8a5046 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.874093984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d931175f42756b3f0dfcfb894727514d9883f7df811dacd6c4b7412e4c16376e,PodSandboxId:f46549be0fc73f94e3ffc6175d5e1b59468bee065b758a47c80689fc3cb1cf26,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1698177158500364329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vrmmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1,},Annotations:map[string]string{io.kubernetes.container.hash: fee88110,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5fcb7e4aed4837c157f3698c3d26ca19f6ecb8f7b661efd04359f691ccf3d2,PodSandboxId:9e30fda03cd7df3818e580b6fcf442abda1c0cea14aa685847b690dfbcf5b915,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698177155596179447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: a8ecb3ab-719c-4623-8af1-422fb0a84baf,},Annotations:map[string]string{io.kubernetes.container.hash: f6735274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fede5c3ed847de11de7bfa974bedb9d9ef58eda27fc90a45f09231efebe112d,PodSandboxId:20cda9d4d261dfc65c2121ec3fd74d3abdfcb07face63f8fb6033005533f04bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1698177155015154702,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg9gw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
709820ae-b9e4-4c6d-b7bc-88f108fa986b,},Annotations:map[string]string{io.kubernetes.container.hash: a5b04afb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9cdb3c7cf54271c8ba7b41da0cd9a1874cfb392703396209c2b15e1fca5a21,PodSandboxId:c07c90db72365e3f64e32eb857c2d7d99e4701140733b75ccc9993d5ee31fe43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1698177147234503478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6812c138f8
23590929cc617a74bd3946,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a4b6eba0355bc864c5c8bc36161956a57ca5115272b79fa63d81503455151e,PodSandboxId:aa4a6451f25b3fe9bc864a2d0fea868140a48a5760aadbbd83a996d0283ad8d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1698177147186232864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e58df49f371ea0943012da254d88eda,},Annotations:map[string]string
{io.kubernetes.container.hash: 905b19c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b308619426a53f8a220bb096c2ba5f68a229b0c75b1c6c45fea1762b7038f219,PodSandboxId:77cbaaf20efce15c9e551520c2cc712478eae3056f33aa358e74fd5dfe6c1910,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1698177147071455061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45de581640c6aa45dd0bd3f23cf21687,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6eb5e040477e56119a9a4a6c0301604b5fbb367b9bb83f8fc1faafcf8c39ac7,PodSandboxId:3636bd9d6b79b07af9c262bbc25f3f33888f74d2605f1170899d17b508d32f64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1698177146757244086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2160dad1ad4aa2736deaa116b27bd0,},Annotations:map[strin
g]string{io.kubernetes.container.hash: dfd701f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9f6acbb1-af0a-4db0-b8fe-6ff58a8a5046 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.902547035Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=bdcbc9d7-973e-47d0-80c9-4a82b45d5d94 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.902755093Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f46549be0fc73f94e3ffc6175d5e1b59468bee065b758a47c80689fc3cb1cf26,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-vrmmb,Uid:a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698177157937922839,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-vrmmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T19:52:33.673727875Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9e30fda03cd7df3818e580b6fcf442abda1c0cea14aa685847b690dfbcf5b915,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a8ecb3ab-719c-4623-8af1-422fb0a84baf,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698177155215388346,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ecb3ab-719c-4623-8af1-422fb0a84baf,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-24T19:52:33.673725609Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20cda9d4d261dfc65c2121ec3fd74d3abdfcb07face63f8fb6033005533f04bc,Metadata:&PodSandboxMetadata{Name:kube-proxy-hg9gw,Uid:709820ae-b9e4-4c6d-b7bc-88f108fa986b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698177154615747392,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hg9gw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709820ae-b9e4-4c6d-b7bc-88f108fa986b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T19:52:33.673722919Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c07c90db72365e3f64e32eb857c2d7d99e4701140733b75ccc9993d5ee31fe43,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-963013,Uid:6812c13
8f823590929cc617a74bd3946,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698177146305438461,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6812c138f823590929cc617a74bd3946,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6812c138f823590929cc617a74bd3946,kubernetes.io/config.seen: 2023-10-24T19:52:25.673586961Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aa4a6451f25b3fe9bc864a2d0fea868140a48a5760aadbbd83a996d0283ad8d8,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-963013,Uid:4e58df49f371ea0943012da254d88eda,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698177146294325421,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
e58df49f371ea0943012da254d88eda,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.204:2379,kubernetes.io/config.hash: 4e58df49f371ea0943012da254d88eda,kubernetes.io/config.seen: 2023-10-24T19:52:25.770188052Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:77cbaaf20efce15c9e551520c2cc712478eae3056f33aa358e74fd5dfe6c1910,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-963013,Uid:45de581640c6aa45dd0bd3f23cf21687,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698177146282349842,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45de581640c6aa45dd0bd3f23cf21687,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 45de581640c6aa45dd0bd3f23cf21687,kubernetes.io/config.seen: 2023-10-24T19
:52:25.673585693Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3636bd9d6b79b07af9c262bbc25f3f33888f74d2605f1170899d17b508d32f64,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-963013,Uid:4b2160dad1ad4aa2736deaa116b27bd0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698177146243911381,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2160dad1ad4aa2736deaa116b27bd0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.204:8443,kubernetes.io/config.hash: 4b2160dad1ad4aa2736deaa116b27bd0,kubernetes.io/config.seen: 2023-10-24T19:52:25.673567449Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=bdcbc9d7-973e-47d0-80c9-4a82b45d5d94 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.903308432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3b079d2f-ca41-41b9-9001-3e9ceab20d29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.903392274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3b079d2f-ca41-41b9-9001-3e9ceab20d29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.903555080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d931175f42756b3f0dfcfb894727514d9883f7df811dacd6c4b7412e4c16376e,PodSandboxId:f46549be0fc73f94e3ffc6175d5e1b59468bee065b758a47c80689fc3cb1cf26,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1698177158500364329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vrmmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1,},Annotations:map[string]string{io.kubernetes.container.hash: fee88110,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5fcb7e4aed4837c157f3698c3d26ca19f6ecb8f7b661efd04359f691ccf3d2,PodSandboxId:9e30fda03cd7df3818e580b6fcf442abda1c0cea14aa685847b690dfbcf5b915,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698177155596179447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: a8ecb3ab-719c-4623-8af1-422fb0a84baf,},Annotations:map[string]string{io.kubernetes.container.hash: f6735274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fede5c3ed847de11de7bfa974bedb9d9ef58eda27fc90a45f09231efebe112d,PodSandboxId:20cda9d4d261dfc65c2121ec3fd74d3abdfcb07face63f8fb6033005533f04bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1698177155015154702,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg9gw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
709820ae-b9e4-4c6d-b7bc-88f108fa986b,},Annotations:map[string]string{io.kubernetes.container.hash: a5b04afb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9cdb3c7cf54271c8ba7b41da0cd9a1874cfb392703396209c2b15e1fca5a21,PodSandboxId:c07c90db72365e3f64e32eb857c2d7d99e4701140733b75ccc9993d5ee31fe43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1698177147234503478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6812c138f8
23590929cc617a74bd3946,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a4b6eba0355bc864c5c8bc36161956a57ca5115272b79fa63d81503455151e,PodSandboxId:aa4a6451f25b3fe9bc864a2d0fea868140a48a5760aadbbd83a996d0283ad8d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1698177147186232864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e58df49f371ea0943012da254d88eda,},Annotations:map[string]string
{io.kubernetes.container.hash: 905b19c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b308619426a53f8a220bb096c2ba5f68a229b0c75b1c6c45fea1762b7038f219,PodSandboxId:77cbaaf20efce15c9e551520c2cc712478eae3056f33aa358e74fd5dfe6c1910,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1698177147071455061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45de581640c6aa45dd0bd3f23cf21687,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6eb5e040477e56119a9a4a6c0301604b5fbb367b9bb83f8fc1faafcf8c39ac7,PodSandboxId:3636bd9d6b79b07af9c262bbc25f3f33888f74d2605f1170899d17b508d32f64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1698177146757244086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2160dad1ad4aa2736deaa116b27bd0,},Annotations:map[strin
g]string{io.kubernetes.container.hash: dfd701f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3b079d2f-ca41-41b9-9001-3e9ceab20d29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.912717398Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=764c6957-d8a2-4e27-8817-85182dad5c1a name=/runtime.v1.RuntimeService/Version
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.912800968Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=764c6957-d8a2-4e27-8817-85182dad5c1a name=/runtime.v1.RuntimeService/Version
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.913610050Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4b07a56d-ea2f-462d-b36a-da82f54617cb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.914163446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698177168914145146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=4b07a56d-ea2f-462d-b36a-da82f54617cb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.914620308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1d8462a9-d6e6-45b7-9f56-5afdf638b33a name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.914689618Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1d8462a9-d6e6-45b7-9f56-5afdf638b33a name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.914848774Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d931175f42756b3f0dfcfb894727514d9883f7df811dacd6c4b7412e4c16376e,PodSandboxId:f46549be0fc73f94e3ffc6175d5e1b59468bee065b758a47c80689fc3cb1cf26,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1698177158500364329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vrmmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1,},Annotations:map[string]string{io.kubernetes.container.hash: fee88110,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5fcb7e4aed4837c157f3698c3d26ca19f6ecb8f7b661efd04359f691ccf3d2,PodSandboxId:9e30fda03cd7df3818e580b6fcf442abda1c0cea14aa685847b690dfbcf5b915,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698177155596179447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: a8ecb3ab-719c-4623-8af1-422fb0a84baf,},Annotations:map[string]string{io.kubernetes.container.hash: f6735274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fede5c3ed847de11de7bfa974bedb9d9ef58eda27fc90a45f09231efebe112d,PodSandboxId:20cda9d4d261dfc65c2121ec3fd74d3abdfcb07face63f8fb6033005533f04bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1698177155015154702,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg9gw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
709820ae-b9e4-4c6d-b7bc-88f108fa986b,},Annotations:map[string]string{io.kubernetes.container.hash: a5b04afb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9cdb3c7cf54271c8ba7b41da0cd9a1874cfb392703396209c2b15e1fca5a21,PodSandboxId:c07c90db72365e3f64e32eb857c2d7d99e4701140733b75ccc9993d5ee31fe43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1698177147234503478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6812c138f8
23590929cc617a74bd3946,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a4b6eba0355bc864c5c8bc36161956a57ca5115272b79fa63d81503455151e,PodSandboxId:aa4a6451f25b3fe9bc864a2d0fea868140a48a5760aadbbd83a996d0283ad8d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1698177147186232864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e58df49f371ea0943012da254d88eda,},Annotations:map[string]string
{io.kubernetes.container.hash: 905b19c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b308619426a53f8a220bb096c2ba5f68a229b0c75b1c6c45fea1762b7038f219,PodSandboxId:77cbaaf20efce15c9e551520c2cc712478eae3056f33aa358e74fd5dfe6c1910,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1698177147071455061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45de581640c6aa45dd0bd3f23cf21687,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6eb5e040477e56119a9a4a6c0301604b5fbb367b9bb83f8fc1faafcf8c39ac7,PodSandboxId:3636bd9d6b79b07af9c262bbc25f3f33888f74d2605f1170899d17b508d32f64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1698177146757244086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2160dad1ad4aa2736deaa116b27bd0,},Annotations:map[strin
g]string{io.kubernetes.container.hash: dfd701f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1d8462a9-d6e6-45b7-9f56-5afdf638b33a name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.950865553Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bc41bde8-7a7f-4186-b05c-af3fd938db20 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.951004799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bc41bde8-7a7f-4186-b05c-af3fd938db20 name=/runtime.v1.RuntimeService/Version
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.952209578Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c2d9f9ba-c5ec-40a2-8597-08096f345890 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.952665308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698177168952651625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=c2d9f9ba-c5ec-40a2-8597-08096f345890 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.953299401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a765ce3c-d64e-4645-b695-68975143015c name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.953366621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a765ce3c-d64e-4645-b695-68975143015c name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 19:52:48 test-preload-963013 crio[712]: time="2023-10-24 19:52:48.953517428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d931175f42756b3f0dfcfb894727514d9883f7df811dacd6c4b7412e4c16376e,PodSandboxId:f46549be0fc73f94e3ffc6175d5e1b59468bee065b758a47c80689fc3cb1cf26,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1698177158500364329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vrmmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1,},Annotations:map[string]string{io.kubernetes.container.hash: fee88110,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5fcb7e4aed4837c157f3698c3d26ca19f6ecb8f7b661efd04359f691ccf3d2,PodSandboxId:9e30fda03cd7df3818e580b6fcf442abda1c0cea14aa685847b690dfbcf5b915,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698177155596179447,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: a8ecb3ab-719c-4623-8af1-422fb0a84baf,},Annotations:map[string]string{io.kubernetes.container.hash: f6735274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fede5c3ed847de11de7bfa974bedb9d9ef58eda27fc90a45f09231efebe112d,PodSandboxId:20cda9d4d261dfc65c2121ec3fd74d3abdfcb07face63f8fb6033005533f04bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1698177155015154702,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg9gw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
709820ae-b9e4-4c6d-b7bc-88f108fa986b,},Annotations:map[string]string{io.kubernetes.container.hash: a5b04afb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9cdb3c7cf54271c8ba7b41da0cd9a1874cfb392703396209c2b15e1fca5a21,PodSandboxId:c07c90db72365e3f64e32eb857c2d7d99e4701140733b75ccc9993d5ee31fe43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1698177147234503478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6812c138f8
23590929cc617a74bd3946,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a4b6eba0355bc864c5c8bc36161956a57ca5115272b79fa63d81503455151e,PodSandboxId:aa4a6451f25b3fe9bc864a2d0fea868140a48a5760aadbbd83a996d0283ad8d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1698177147186232864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e58df49f371ea0943012da254d88eda,},Annotations:map[string]string
{io.kubernetes.container.hash: 905b19c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b308619426a53f8a220bb096c2ba5f68a229b0c75b1c6c45fea1762b7038f219,PodSandboxId:77cbaaf20efce15c9e551520c2cc712478eae3056f33aa358e74fd5dfe6c1910,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1698177147071455061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45de581640c6aa45dd0bd3f23cf21687,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6eb5e040477e56119a9a4a6c0301604b5fbb367b9bb83f8fc1faafcf8c39ac7,PodSandboxId:3636bd9d6b79b07af9c262bbc25f3f33888f74d2605f1170899d17b508d32f64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1698177146757244086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-963013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2160dad1ad4aa2736deaa116b27bd0,},Annotations:map[strin
g]string{io.kubernetes.container.hash: dfd701f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a765ce3c-d64e-4645-b695-68975143015c name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d931175f42756       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   10 seconds ago      Running             coredns                   1                   f46549be0fc73       coredns-6d4b75cb6d-vrmmb
	dd5fcb7e4aed4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   9e30fda03cd7d       storage-provisioner
	4fede5c3ed847       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   20cda9d4d261d       kube-proxy-hg9gw
	6f9cdb3c7cf54       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   c07c90db72365       kube-scheduler-test-preload-963013
	32a4b6eba0355       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   aa4a6451f25b3       etcd-test-preload-963013
	b308619426a53       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   77cbaaf20efce       kube-controller-manager-test-preload-963013
	f6eb5e040477e       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            1                   3636bd9d6b79b       kube-apiserver-test-preload-963013
	
	* 
	* ==> coredns [d931175f42756b3f0dfcfb894727514d9883f7df811dacd6c4b7412e4c16376e] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:50124 - 24190 "HINFO IN 667921810087772598.4689218309463312843. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011367457s
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-963013
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-963013
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=test-preload-963013
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_51_06_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:51:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-963013
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:52:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:52:43 +0000   Tue, 24 Oct 2023 19:50:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:52:43 +0000   Tue, 24 Oct 2023 19:50:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:52:43 +0000   Tue, 24 Oct 2023 19:50:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:52:43 +0000   Tue, 24 Oct 2023 19:52:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.204
	  Hostname:    test-preload-963013
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 051a9c2dc7a14781abe7bbf1a31424fa
	  System UUID:                051a9c2d-c7a1-4781-abe7-bbf1a31424fa
	  Boot ID:                    47ea75c8-225d-43c8-918d-b61648d62b2f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-vrmmb                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     91s
	  kube-system                 etcd-test-preload-963013                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         103s
	  kube-system                 kube-apiserver-test-preload-963013             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-test-preload-963013    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-hg9gw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-test-preload-963013             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  113s (x5 over 113s)  kubelet          Node test-preload-963013 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x5 over 113s)  kubelet          Node test-preload-963013 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x4 over 113s)  kubelet          Node test-preload-963013 status is now: NodeHasSufficientPID
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s                 kubelet          Node test-preload-963013 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s                 kubelet          Node test-preload-963013 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s                 kubelet          Node test-preload-963013 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                93s                  kubelet          Node test-preload-963013 status is now: NodeReady
	  Normal  RegisteredNode           91s                  node-controller  Node test-preload-963013 event: Registered Node test-preload-963013 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node test-preload-963013 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node test-preload-963013 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node test-preload-963013 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-963013 event: Registered Node test-preload-963013 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct24 19:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066949] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.332255] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.395675] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.143920] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.670914] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.148741] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.102664] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.141222] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.102700] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.202212] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Oct24 19:52] systemd-fstab-generator[1091]: Ignoring "noauto" for root device
	[  +9.863194] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.108228] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [32a4b6eba0355bc864c5c8bc36161956a57ca5115272b79fa63d81503455151e] <==
	* {"level":"info","ts":"2023-10-24T19:52:29.277Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"7dd4abf80c2dae76","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-10-24T19:52:29.281Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-10-24T19:52:29.285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 switched to configuration voters=(9067061031648210550)"}
	{"level":"info","ts":"2023-10-24T19:52:29.285Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ae97a28c245b4e6c","local-member-id":"7dd4abf80c2dae76","added-peer-id":"7dd4abf80c2dae76","added-peer-peer-urls":["https://192.168.39.204:2380"]}
	{"level":"info","ts":"2023-10-24T19:52:29.285Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ae97a28c245b4e6c","local-member-id":"7dd4abf80c2dae76","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:52:29.285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:52:29.305Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-24T19:52:29.305Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7dd4abf80c2dae76","initial-advertise-peer-urls":["https://192.168.39.204:2380"],"listen-peer-urls":["https://192.168.39.204:2380"],"advertise-client-urls":["https://192.168.39.204:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.204:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-24T19:52:29.306Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.204:2380"}
	{"level":"info","ts":"2023-10-24T19:52:29.306Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.204:2380"}
	{"level":"info","ts":"2023-10-24T19:52:29.306Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T19:52:30.427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-24T19:52:30.427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-24T19:52:30.427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 received MsgPreVoteResp from 7dd4abf80c2dae76 at term 2"}
	{"level":"info","ts":"2023-10-24T19:52:30.427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 became candidate at term 3"}
	{"level":"info","ts":"2023-10-24T19:52:30.427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 received MsgVoteResp from 7dd4abf80c2dae76 at term 3"}
	{"level":"info","ts":"2023-10-24T19:52:30.427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7dd4abf80c2dae76 became leader at term 3"}
	{"level":"info","ts":"2023-10-24T19:52:30.427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7dd4abf80c2dae76 elected leader 7dd4abf80c2dae76 at term 3"}
	{"level":"info","ts":"2023-10-24T19:52:30.428Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"7dd4abf80c2dae76","local-member-attributes":"{Name:test-preload-963013 ClientURLs:[https://192.168.39.204:2379]}","request-path":"/0/members/7dd4abf80c2dae76/attributes","cluster-id":"ae97a28c245b4e6c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T19:52:30.428Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:52:30.429Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T19:52:30.429Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T19:52:30.429Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:52:30.430Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.204:2379"}
	{"level":"info","ts":"2023-10-24T19:52:30.432Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  19:52:49 up 1 min,  0 users,  load average: 1.54, 0.45, 0.16
	Linux test-preload-963013 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f6eb5e040477e56119a9a4a6c0301604b5fbb367b9bb83f8fc1faafcf8c39ac7] <==
	* I1024 19:52:32.817302       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1024 19:52:32.817334       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1024 19:52:32.817429       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1024 19:52:32.836788       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1024 19:52:32.867879       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1024 19:52:32.870260       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1024 19:52:32.970417       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E1024 19:52:32.982360       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1024 19:52:32.996690       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1024 19:52:33.004373       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1024 19:52:33.004475       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1024 19:52:33.012038       1 cache.go:39] Caches are synced for autoregister controller
	I1024 19:52:33.012699       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 19:52:33.022033       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1024 19:52:33.046103       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 19:52:33.492170       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1024 19:52:33.805555       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1024 19:52:34.559545       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1024 19:52:34.569336       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1024 19:52:34.611241       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1024 19:52:34.641178       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1024 19:52:34.647562       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1024 19:52:35.360818       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1024 19:52:45.314201       1 controller.go:611] quota admission added evaluator for: endpoints
	I1024 19:52:45.418692       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [b308619426a53f8a220bb096c2ba5f68a229b0c75b1c6c45fea1762b7038f219] <==
	* I1024 19:52:45.343789       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1024 19:52:45.343796       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1024 19:52:45.349190       1 shared_informer.go:262] Caches are synced for GC
	I1024 19:52:45.351433       1 shared_informer.go:262] Caches are synced for daemon sets
	I1024 19:52:45.354312       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1024 19:52:45.356668       1 shared_informer.go:262] Caches are synced for service account
	I1024 19:52:45.371299       1 shared_informer.go:262] Caches are synced for namespace
	I1024 19:52:45.393359       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1024 19:52:45.409453       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1024 19:52:45.434048       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1024 19:52:45.478306       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1024 19:52:45.479531       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1024 19:52:45.479598       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1024 19:52:45.479621       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1024 19:52:45.493273       1 shared_informer.go:262] Caches are synced for attach detach
	I1024 19:52:45.501448       1 shared_informer.go:262] Caches are synced for PVC protection
	I1024 19:52:45.505812       1 shared_informer.go:262] Caches are synced for ephemeral
	I1024 19:52:45.516538       1 shared_informer.go:262] Caches are synced for resource quota
	I1024 19:52:45.530137       1 shared_informer.go:262] Caches are synced for persistent volume
	I1024 19:52:45.538430       1 shared_informer.go:262] Caches are synced for stateful set
	I1024 19:52:45.556487       1 shared_informer.go:262] Caches are synced for resource quota
	I1024 19:52:45.561799       1 shared_informer.go:262] Caches are synced for expand
	I1024 19:52:45.973907       1 shared_informer.go:262] Caches are synced for garbage collector
	I1024 19:52:45.974006       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1024 19:52:45.987617       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [4fede5c3ed847de11de7bfa974bedb9d9ef58eda27fc90a45f09231efebe112d] <==
	* I1024 19:52:35.265676       1 node.go:163] Successfully retrieved node IP: 192.168.39.204
	I1024 19:52:35.265795       1 server_others.go:138] "Detected node IP" address="192.168.39.204"
	I1024 19:52:35.265835       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1024 19:52:35.351712       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1024 19:52:35.351798       1 server_others.go:206] "Using iptables Proxier"
	I1024 19:52:35.351836       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1024 19:52:35.352164       1 server.go:661] "Version info" version="v1.24.4"
	I1024 19:52:35.352216       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:52:35.353841       1 config.go:317] "Starting service config controller"
	I1024 19:52:35.353900       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1024 19:52:35.354037       1 config.go:226] "Starting endpoint slice config controller"
	I1024 19:52:35.354069       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1024 19:52:35.357569       1 config.go:444] "Starting node config controller"
	I1024 19:52:35.357620       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1024 19:52:35.455205       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1024 19:52:35.455284       1 shared_informer.go:262] Caches are synced for service config
	I1024 19:52:35.457807       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [6f9cdb3c7cf54271c8ba7b41da0cd9a1874cfb392703396209c2b15e1fca5a21] <==
	* I1024 19:52:29.801874       1 serving.go:348] Generated self-signed cert in-memory
	W1024 19:52:32.961069       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 19:52:32.961221       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 19:52:32.961236       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 19:52:32.961244       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 19:52:32.998222       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1024 19:52:32.998285       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:52:33.007096       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1024 19:52:33.009148       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 19:52:33.016544       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 19:52:33.028397       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:52:33.129657       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 19:51:49 UTC, ends at Tue 2023-10-24 19:52:49 UTC. --
	Oct 24 19:52:33 test-preload-963013 kubelet[1097]: E1024 19:52:33.679260    1097 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-vrmmb" podUID=a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1
	Oct 24 19:52:33 test-preload-963013 kubelet[1097]: I1024 19:52:33.741892    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/709820ae-b9e4-4c6d-b7bc-88f108fa986b-kube-proxy\") pod \"kube-proxy-hg9gw\" (UID: \"709820ae-b9e4-4c6d-b7bc-88f108fa986b\") " pod="kube-system/kube-proxy-hg9gw"
	Oct 24 19:52:33 test-preload-963013 kubelet[1097]: I1024 19:52:33.741988    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/709820ae-b9e4-4c6d-b7bc-88f108fa986b-xtables-lock\") pod \"kube-proxy-hg9gw\" (UID: \"709820ae-b9e4-4c6d-b7bc-88f108fa986b\") " pod="kube-system/kube-proxy-hg9gw"
	Oct 24 19:52:33 test-preload-963013 kubelet[1097]: I1024 19:52:33.742016    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsj96\" (UniqueName: \"kubernetes.io/projected/709820ae-b9e4-4c6d-b7bc-88f108fa986b-kube-api-access-dsj96\") pod \"kube-proxy-hg9gw\" (UID: \"709820ae-b9e4-4c6d-b7bc-88f108fa986b\") " pod="kube-system/kube-proxy-hg9gw"
	Oct 24 19:52:33 test-preload-963013 kubelet[1097]: I1024 19:52:33.742037    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1-config-volume\") pod \"coredns-6d4b75cb6d-vrmmb\" (UID: \"a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1\") " pod="kube-system/coredns-6d4b75cb6d-vrmmb"
	Oct 24 19:52:33 test-preload-963013 kubelet[1097]: I1024 19:52:33.742056    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbzlf\" (UniqueName: \"kubernetes.io/projected/a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1-kube-api-access-xbzlf\") pod \"coredns-6d4b75cb6d-vrmmb\" (UID: \"a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1\") " pod="kube-system/coredns-6d4b75cb6d-vrmmb"
	Oct 24 19:52:33 test-preload-963013 kubelet[1097]: I1024 19:52:33.742074    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a8ecb3ab-719c-4623-8af1-422fb0a84baf-tmp\") pod \"storage-provisioner\" (UID: \"a8ecb3ab-719c-4623-8af1-422fb0a84baf\") " pod="kube-system/storage-provisioner"
	Oct 24 19:52:33 test-preload-963013 kubelet[1097]: I1024 19:52:33.742099    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/709820ae-b9e4-4c6d-b7bc-88f108fa986b-lib-modules\") pod \"kube-proxy-hg9gw\" (UID: \"709820ae-b9e4-4c6d-b7bc-88f108fa986b\") " pod="kube-system/kube-proxy-hg9gw"
	Oct 24 19:52:33 test-preload-963013 kubelet[1097]: I1024 19:52:33.742116    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxvfc\" (UniqueName: \"kubernetes.io/projected/a8ecb3ab-719c-4623-8af1-422fb0a84baf-kube-api-access-pxvfc\") pod \"storage-provisioner\" (UID: \"a8ecb3ab-719c-4623-8af1-422fb0a84baf\") " pod="kube-system/storage-provisioner"
	Oct 24 19:52:33 test-preload-963013 kubelet[1097]: I1024 19:52:33.742130    1097 reconciler.go:159] "Reconciler: start to sync state"
	Oct 24 19:52:34 test-preload-963013 kubelet[1097]: I1024 19:52:34.214343    1097 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwgkn\" (UniqueName: \"kubernetes.io/projected/4e4411e1-4dc8-4424-abcd-567f211631dd-kube-api-access-xwgkn\") pod \"4e4411e1-4dc8-4424-abcd-567f211631dd\" (UID: \"4e4411e1-4dc8-4424-abcd-567f211631dd\") "
	Oct 24 19:52:34 test-preload-963013 kubelet[1097]: I1024 19:52:34.214385    1097 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4411e1-4dc8-4424-abcd-567f211631dd-config-volume\") pod \"4e4411e1-4dc8-4424-abcd-567f211631dd\" (UID: \"4e4411e1-4dc8-4424-abcd-567f211631dd\") "
	Oct 24 19:52:34 test-preload-963013 kubelet[1097]: E1024 19:52:34.214825    1097 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 24 19:52:34 test-preload-963013 kubelet[1097]: E1024 19:52:34.214982    1097 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1-config-volume podName:a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1 nodeName:}" failed. No retries permitted until 2023-10-24 19:52:34.714905261 +0000 UTC m=+9.167431499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1-config-volume") pod "coredns-6d4b75cb6d-vrmmb" (UID: "a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1") : object "kube-system"/"coredns" not registered
	Oct 24 19:52:34 test-preload-963013 kubelet[1097]: W1024 19:52:34.216163    1097 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/4e4411e1-4dc8-4424-abcd-567f211631dd/volumes/kubernetes.io~projected/kube-api-access-xwgkn: clearQuota called, but quotas disabled
	Oct 24 19:52:34 test-preload-963013 kubelet[1097]: W1024 19:52:34.216179    1097 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/4e4411e1-4dc8-4424-abcd-567f211631dd/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Oct 24 19:52:34 test-preload-963013 kubelet[1097]: I1024 19:52:34.216551    1097 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e4411e1-4dc8-4424-abcd-567f211631dd-kube-api-access-xwgkn" (OuterVolumeSpecName: "kube-api-access-xwgkn") pod "4e4411e1-4dc8-4424-abcd-567f211631dd" (UID: "4e4411e1-4dc8-4424-abcd-567f211631dd"). InnerVolumeSpecName "kube-api-access-xwgkn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 24 19:52:34 test-preload-963013 kubelet[1097]: I1024 19:52:34.216775    1097 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e4411e1-4dc8-4424-abcd-567f211631dd-config-volume" (OuterVolumeSpecName: "config-volume") pod "4e4411e1-4dc8-4424-abcd-567f211631dd" (UID: "4e4411e1-4dc8-4424-abcd-567f211631dd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 24 19:52:34 test-preload-963013 kubelet[1097]: I1024 19:52:34.315673    1097 reconciler.go:384] "Volume detached for volume \"kube-api-access-xwgkn\" (UniqueName: \"kubernetes.io/projected/4e4411e1-4dc8-4424-abcd-567f211631dd-kube-api-access-xwgkn\") on node \"test-preload-963013\" DevicePath \"\""
	Oct 24 19:52:34 test-preload-963013 kubelet[1097]: I1024 19:52:34.315753    1097 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4411e1-4dc8-4424-abcd-567f211631dd-config-volume\") on node \"test-preload-963013\" DevicePath \"\""
	Oct 24 19:52:34 test-preload-963013 kubelet[1097]: E1024 19:52:34.719871    1097 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 24 19:52:34 test-preload-963013 kubelet[1097]: E1024 19:52:34.720008    1097 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1-config-volume podName:a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1 nodeName:}" failed. No retries permitted until 2023-10-24 19:52:35.719993497 +0000 UTC m=+10.172519723 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1-config-volume") pod "coredns-6d4b75cb6d-vrmmb" (UID: "a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1") : object "kube-system"/"coredns" not registered
	Oct 24 19:52:35 test-preload-963013 kubelet[1097]: E1024 19:52:35.726094    1097 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 24 19:52:35 test-preload-963013 kubelet[1097]: E1024 19:52:35.726151    1097 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1-config-volume podName:a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1 nodeName:}" failed. No retries permitted until 2023-10-24 19:52:37.726138202 +0000 UTC m=+12.178664428 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1-config-volume") pod "coredns-6d4b75cb6d-vrmmb" (UID: "a08ab93c-65fa-4e3d-b9d0-00a00d8cb4a1") : object "kube-system"/"coredns" not registered
	Oct 24 19:52:37 test-preload-963013 kubelet[1097]: I1024 19:52:37.814563    1097 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4e4411e1-4dc8-4424-abcd-567f211631dd path="/var/lib/kubelet/pods/4e4411e1-4dc8-4424-abcd-567f211631dd/volumes"
	
	* 
	* ==> storage-provisioner [dd5fcb7e4aed4837c157f3698c3d26ca19f6ecb8f7b661efd04359f691ccf3d2] <==
	* I1024 19:52:35.712257       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:52:35.727129       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:52:35.727218       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-963013 -n test-preload-963013
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-963013 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-963013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-963013
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-963013: (1.099247563s)
--- FAIL: TestPreload (185.14s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (168.47s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.3738485795.exe start -p running-upgrade-880777 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1024 19:56:00.584722   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.3738485795.exe start -p running-upgrade-880777 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m15.745834129s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-880777 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-880777 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (30.330073515s)

                                                
                                                
-- stdout --
	* [running-upgrade-880777] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-880777 in cluster running-upgrade-880777
	* Updating the running kvm2 "running-upgrade-880777" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 19:57:03.924713   41427 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:57:03.924992   41427 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:57:03.925002   41427 out.go:309] Setting ErrFile to fd 2...
	I1024 19:57:03.925007   41427 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:57:03.925169   41427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 19:57:03.925719   41427 out.go:303] Setting JSON to false
	I1024 19:57:03.926638   41427 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5722,"bootTime":1698171702,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:57:03.926708   41427 start.go:138] virtualization: kvm guest
	I1024 19:57:03.929159   41427 out.go:177] * [running-upgrade-880777] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:57:03.930917   41427 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:57:03.932847   41427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:57:03.930930   41427 notify.go:220] Checking for updates...
	I1024 19:57:03.935797   41427 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:57:03.937337   41427 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:57:03.938688   41427 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:57:03.940404   41427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:57:03.942405   41427 config.go:182] Loaded profile config "running-upgrade-880777": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1024 19:57:03.942426   41427 start_flags.go:689] config upgrade: Driver=kvm2
	I1024 19:57:03.942438   41427 start_flags.go:701] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 19:57:03.942530   41427 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/running-upgrade-880777/config.json ...
	I1024 19:57:03.943157   41427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:57:03.943220   41427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:03.957831   41427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41423
	I1024 19:57:03.958193   41427 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:03.958825   41427 main.go:141] libmachine: Using API Version  1
	I1024 19:57:03.958855   41427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:03.959210   41427 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:03.959439   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .DriverName
	I1024 19:57:03.961467   41427 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 19:57:03.963051   41427 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:57:03.963363   41427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:57:03.963405   41427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:03.978955   41427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34089
	I1024 19:57:03.979482   41427 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:03.980029   41427 main.go:141] libmachine: Using API Version  1
	I1024 19:57:03.980064   41427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:03.980365   41427 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:03.980621   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .DriverName
	I1024 19:57:04.014066   41427 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 19:57:04.015483   41427 start.go:298] selected driver: kvm2
	I1024 19:57:04.015497   41427 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-880777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.95 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 19:57:04.015612   41427 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:57:04.016276   41427 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:57:04.016373   41427 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:57:04.031377   41427 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:57:04.031837   41427 cni.go:84] Creating CNI manager for ""
	I1024 19:57:04.031869   41427 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1024 19:57:04.031878   41427 start_flags.go:323] config:
	{Name:running-upgrade-880777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.95 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 19:57:04.032121   41427 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:57:04.034163   41427 out.go:177] * Starting control plane node running-upgrade-880777 in cluster running-upgrade-880777
	I1024 19:57:04.035983   41427 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1024 19:57:04.059067   41427 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1024 19:57:04.059201   41427 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/running-upgrade-880777/config.json ...
	I1024 19:57:04.059377   41427 cache.go:107] acquiring lock: {Name:mk8513e8168be955085f93d6b32fea4f84fc85b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:57:04.059408   41427 cache.go:107] acquiring lock: {Name:mkabe9509a46135b02bebc423eb505c49f9ff3ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:57:04.059415   41427 cache.go:107] acquiring lock: {Name:mk735c77cd83accc7a1217449f6c44ae25f80f3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:57:04.059470   41427 cache.go:115] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1024 19:57:04.059463   41427 start.go:365] acquiring machines lock for running-upgrade-880777: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:57:04.059482   41427 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 118.431µs
	I1024 19:57:04.059503   41427 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1024 19:57:04.059378   41427 cache.go:107] acquiring lock: {Name:mk2ae1454cdcf98345f417f995dfd08b4018871a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:57:04.059501   41427 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1024 19:57:04.059525   41427 cache.go:107] acquiring lock: {Name:mk11b35d63755e95e387e3ac6d6cba0a04d43849 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:57:04.059549   41427 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1024 19:57:04.059568   41427 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1024 19:57:04.059534   41427 cache.go:107] acquiring lock: {Name:mk64b100caaf53cfecea06c9a7cbc7fd3a7c24bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:57:04.060085   41427 cache.go:107] acquiring lock: {Name:mk3cadfd39c1532c7ddff0d0b209d1625b153615 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:57:04.060246   41427 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1024 19:57:04.060255   41427 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1024 19:57:04.060239   41427 cache.go:107] acquiring lock: {Name:mk800859e863c9c721aef96ed800bfd37a969241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:57:04.060335   41427 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1024 19:57:04.061010   41427 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1024 19:57:04.061026   41427 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1024 19:57:04.061113   41427 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1024 19:57:04.061404   41427 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:57:04.061833   41427 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1024 19:57:04.062667   41427 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1024 19:57:04.062808   41427 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:57:04.063132   41427 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1024 19:57:04.220941   41427 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1024 19:57:04.222982   41427 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1024 19:57:04.241764   41427 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1024 19:57:04.263119   41427 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1024 19:57:04.271293   41427 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1024 19:57:04.276398   41427 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1024 19:57:04.278977   41427 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1024 19:57:04.323967   41427 cache.go:157] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1024 19:57:04.324058   41427 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 264.657815ms
	I1024 19:57:04.324105   41427 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1024 19:57:04.988356   41427 cache.go:157] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1024 19:57:04.988385   41427 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 928.88605ms
	I1024 19:57:04.988402   41427 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1024 19:57:05.415239   41427 cache.go:157] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1024 19:57:05.415264   41427 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.35585153s
	I1024 19:57:05.415274   41427 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1024 19:57:05.508030   41427 cache.go:157] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1024 19:57:05.508056   41427 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.448689757s
	I1024 19:57:05.508070   41427 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1024 19:57:05.943393   41427 cache.go:157] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1024 19:57:05.943425   41427 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.883938817s
	I1024 19:57:05.943440   41427 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1024 19:57:06.146638   41427 cache.go:157] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1024 19:57:06.146664   41427 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.086650168s
	I1024 19:57:06.146675   41427 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1024 19:57:06.575764   41427 cache.go:157] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1024 19:57:06.575800   41427 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.515567241s
	I1024 19:57:06.575816   41427 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1024 19:57:06.575826   41427 cache.go:87] Successfully saved all images to host disk.
	I1024 19:57:30.422237   41427 start.go:369] acquired machines lock for "running-upgrade-880777" in 26.362732841s
	I1024 19:57:30.422299   41427 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:57:30.422308   41427 fix.go:54] fixHost starting: minikube
	I1024 19:57:30.422726   41427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:57:30.422764   41427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:30.441877   41427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I1024 19:57:30.442247   41427 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:30.442679   41427 main.go:141] libmachine: Using API Version  1
	I1024 19:57:30.442705   41427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:30.443110   41427 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:30.443294   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .DriverName
	I1024 19:57:30.443435   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetState
	I1024 19:57:30.445007   41427 fix.go:102] recreateIfNeeded on running-upgrade-880777: state=Running err=<nil>
	W1024 19:57:30.445031   41427 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:57:30.447368   41427 out.go:177] * Updating the running kvm2 "running-upgrade-880777" VM ...
	I1024 19:57:30.448886   41427 machine.go:88] provisioning docker machine ...
	I1024 19:57:30.448905   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .DriverName
	I1024 19:57:30.449104   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetMachineName
	I1024 19:57:30.449259   41427 buildroot.go:166] provisioning hostname "running-upgrade-880777"
	I1024 19:57:30.449282   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetMachineName
	I1024 19:57:30.449471   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHHostname
	I1024 19:57:30.451978   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:30.452524   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:57:c9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:55:18 +0000 UTC Type:0 Mac:52:54:00:10:57:c9 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:running-upgrade-880777 Clientid:01:52:54:00:10:57:c9}
	I1024 19:57:30.452557   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined IP address 192.168.50.95 and MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:30.452701   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHPort
	I1024 19:57:30.452868   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHKeyPath
	I1024 19:57:30.452980   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHKeyPath
	I1024 19:57:30.453121   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHUsername
	I1024 19:57:30.453252   41427 main.go:141] libmachine: Using SSH client type: native
	I1024 19:57:30.453794   41427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I1024 19:57:30.453811   41427 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-880777 && echo "running-upgrade-880777" | sudo tee /etc/hostname
	I1024 19:57:30.585847   41427 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-880777
	
	I1024 19:57:30.585883   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHHostname
	I1024 19:57:30.589098   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:30.589627   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:57:c9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:55:18 +0000 UTC Type:0 Mac:52:54:00:10:57:c9 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:running-upgrade-880777 Clientid:01:52:54:00:10:57:c9}
	I1024 19:57:30.589665   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined IP address 192.168.50.95 and MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:30.589877   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHPort
	I1024 19:57:30.590080   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHKeyPath
	I1024 19:57:30.590255   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHKeyPath
	I1024 19:57:30.590396   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHUsername
	I1024 19:57:30.590592   41427 main.go:141] libmachine: Using SSH client type: native
	I1024 19:57:30.590960   41427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I1024 19:57:30.590992   41427 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-880777' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-880777/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-880777' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:57:30.710639   41427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:57:30.710662   41427 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 19:57:30.710685   41427 buildroot.go:174] setting up certificates
	I1024 19:57:30.710693   41427 provision.go:83] configureAuth start
	I1024 19:57:30.710702   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetMachineName
	I1024 19:57:30.710952   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetIP
	I1024 19:57:30.713994   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:30.714442   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:57:c9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:55:18 +0000 UTC Type:0 Mac:52:54:00:10:57:c9 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:running-upgrade-880777 Clientid:01:52:54:00:10:57:c9}
	I1024 19:57:30.714474   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined IP address 192.168.50.95 and MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:30.714679   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHHostname
	I1024 19:57:30.717260   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:30.717679   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:57:c9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:55:18 +0000 UTC Type:0 Mac:52:54:00:10:57:c9 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:running-upgrade-880777 Clientid:01:52:54:00:10:57:c9}
	I1024 19:57:30.717716   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined IP address 192.168.50.95 and MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:30.717817   41427 provision.go:138] copyHostCerts
	I1024 19:57:30.717871   41427 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 19:57:30.717883   41427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 19:57:30.717949   41427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 19:57:30.718057   41427 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 19:57:30.718067   41427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 19:57:30.718094   41427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 19:57:30.718171   41427 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 19:57:30.718180   41427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 19:57:30.718206   41427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 19:57:30.718273   41427 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-880777 san=[192.168.50.95 192.168.50.95 localhost 127.0.0.1 minikube running-upgrade-880777]
	I1024 19:57:31.206456   41427 provision.go:172] copyRemoteCerts
	I1024 19:57:31.206543   41427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:57:31.206579   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHHostname
	I1024 19:57:31.209545   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:31.209938   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:57:c9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:55:18 +0000 UTC Type:0 Mac:52:54:00:10:57:c9 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:running-upgrade-880777 Clientid:01:52:54:00:10:57:c9}
	I1024 19:57:31.209978   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined IP address 192.168.50.95 and MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:31.210202   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHPort
	I1024 19:57:31.210407   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHKeyPath
	I1024 19:57:31.210573   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHUsername
	I1024 19:57:31.210716   41427 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/running-upgrade-880777/id_rsa Username:docker}
	I1024 19:57:31.304035   41427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 19:57:31.320604   41427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 19:57:31.336479   41427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:57:31.352978   41427 provision.go:86] duration metric: configureAuth took 642.268588ms
	I1024 19:57:31.353006   41427 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:57:31.353207   41427 config.go:182] Loaded profile config "running-upgrade-880777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1024 19:57:31.353287   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHHostname
	I1024 19:57:31.356547   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:31.356956   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:57:c9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:55:18 +0000 UTC Type:0 Mac:52:54:00:10:57:c9 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:running-upgrade-880777 Clientid:01:52:54:00:10:57:c9}
	I1024 19:57:31.356988   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined IP address 192.168.50.95 and MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:31.357208   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHPort
	I1024 19:57:31.357441   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHKeyPath
	I1024 19:57:31.357606   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHKeyPath
	I1024 19:57:31.357739   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHUsername
	I1024 19:57:31.357913   41427 main.go:141] libmachine: Using SSH client type: native
	I1024 19:57:31.358323   41427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I1024 19:57:31.358349   41427 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:57:32.127271   41427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:57:32.127298   41427 machine.go:91] provisioned docker machine in 1.678400382s
	I1024 19:57:32.127311   41427 start.go:300] post-start starting for "running-upgrade-880777" (driver="kvm2")
	I1024 19:57:32.127324   41427 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:57:32.127350   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .DriverName
	I1024 19:57:32.127704   41427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:57:32.127739   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHHostname
	I1024 19:57:32.131198   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:32.131645   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:57:c9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:55:18 +0000 UTC Type:0 Mac:52:54:00:10:57:c9 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:running-upgrade-880777 Clientid:01:52:54:00:10:57:c9}
	I1024 19:57:32.131681   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined IP address 192.168.50.95 and MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:32.131927   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHPort
	I1024 19:57:32.132134   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHKeyPath
	I1024 19:57:32.132310   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHUsername
	I1024 19:57:32.132505   41427 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/running-upgrade-880777/id_rsa Username:docker}
	I1024 19:57:32.226178   41427 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:57:32.232287   41427 info.go:137] Remote host: Buildroot 2019.02.7
	I1024 19:57:32.232313   41427 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 19:57:32.232397   41427 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 19:57:32.232498   41427 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 19:57:32.232614   41427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:57:32.239048   41427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 19:57:32.256897   41427 start.go:303] post-start completed in 129.573498ms
	I1024 19:57:32.256917   41427 fix.go:56] fixHost completed within 1.834611509s
	I1024 19:57:32.256939   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHHostname
	I1024 19:57:32.260215   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:32.260711   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:57:c9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:55:18 +0000 UTC Type:0 Mac:52:54:00:10:57:c9 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:running-upgrade-880777 Clientid:01:52:54:00:10:57:c9}
	I1024 19:57:32.260738   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined IP address 192.168.50.95 and MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:32.260932   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHPort
	I1024 19:57:32.261116   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHKeyPath
	I1024 19:57:32.261264   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHKeyPath
	I1024 19:57:32.261408   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHUsername
	I1024 19:57:32.261624   41427 main.go:141] libmachine: Using SSH client type: native
	I1024 19:57:32.262157   41427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.95 22 <nil> <nil>}
	I1024 19:57:32.262181   41427 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1024 19:57:32.383284   41427 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698177452.379517653
	
	I1024 19:57:32.383305   41427 fix.go:206] guest clock: 1698177452.379517653
	I1024 19:57:32.383314   41427 fix.go:219] Guest: 2023-10-24 19:57:32.379517653 +0000 UTC Remote: 2023-10-24 19:57:32.256920368 +0000 UTC m=+28.384254632 (delta=122.597285ms)
	I1024 19:57:32.383336   41427 fix.go:190] guest clock delta is within tolerance: 122.597285ms
	I1024 19:57:32.383343   41427 start.go:83] releasing machines lock for "running-upgrade-880777", held for 1.961076816s
	I1024 19:57:32.383369   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .DriverName
	I1024 19:57:32.383554   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetIP
	I1024 19:57:32.386751   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:32.387113   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:57:c9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:55:18 +0000 UTC Type:0 Mac:52:54:00:10:57:c9 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:running-upgrade-880777 Clientid:01:52:54:00:10:57:c9}
	I1024 19:57:32.387143   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined IP address 192.168.50.95 and MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:32.387293   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .DriverName
	I1024 19:57:32.387850   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .DriverName
	I1024 19:57:32.388064   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .DriverName
	I1024 19:57:32.388135   41427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:57:32.388185   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHHostname
	I1024 19:57:32.388544   41427 ssh_runner.go:195] Run: cat /version.json
	I1024 19:57:32.388562   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHHostname
	I1024 19:57:32.391899   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:32.392206   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:57:c9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:55:18 +0000 UTC Type:0 Mac:52:54:00:10:57:c9 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:running-upgrade-880777 Clientid:01:52:54:00:10:57:c9}
	I1024 19:57:32.392232   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined IP address 192.168.50.95 and MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:32.392399   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHPort
	I1024 19:57:32.392543   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHKeyPath
	I1024 19:57:32.392628   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHUsername
	I1024 19:57:32.392703   41427 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/running-upgrade-880777/id_rsa Username:docker}
	I1024 19:57:32.393360   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:32.393598   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:57:c9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:55:18 +0000 UTC Type:0 Mac:52:54:00:10:57:c9 Iaid: IPaddr:192.168.50.95 Prefix:24 Hostname:running-upgrade-880777 Clientid:01:52:54:00:10:57:c9}
	I1024 19:57:32.393650   41427 main.go:141] libmachine: (running-upgrade-880777) DBG | domain running-upgrade-880777 has defined IP address 192.168.50.95 and MAC address 52:54:00:10:57:c9 in network minikube-net
	I1024 19:57:32.393855   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHPort
	I1024 19:57:32.393989   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHKeyPath
	I1024 19:57:32.394113   41427 main.go:141] libmachine: (running-upgrade-880777) Calling .GetSSHUsername
	I1024 19:57:32.394229   41427 sshutil.go:53] new ssh client: &{IP:192.168.50.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/running-upgrade-880777/id_rsa Username:docker}
	W1024 19:57:32.503930   41427 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1024 19:57:32.504012   41427 ssh_runner.go:195] Run: systemctl --version
	I1024 19:57:32.515019   41427 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:57:32.606174   41427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 19:57:32.612928   41427 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:57:32.613007   41427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:57:32.620586   41427 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1024 19:57:32.620606   41427 start.go:472] detecting cgroup driver to use...
	I1024 19:57:32.620657   41427 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:57:32.636518   41427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:57:32.651171   41427 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:57:32.651220   41427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:57:32.663257   41427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:57:32.675456   41427 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1024 19:57:32.686900   41427 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1024 19:57:32.686971   41427 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:57:32.865486   41427 docker.go:214] disabling docker service ...
	I1024 19:57:32.865555   41427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:57:33.891641   41427 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.026057276s)
	I1024 19:57:33.891708   41427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:57:33.906630   41427 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:57:34.013949   41427 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:57:34.154867   41427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:57:34.167222   41427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:57:34.179081   41427 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1024 19:57:34.179132   41427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:57:34.187872   41427 out.go:177] 
	W1024 19:57:34.189463   41427 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1024 19:57:34.189489   41427 out.go:239] * 
	* 
	W1024 19:57:34.190402   41427 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 19:57:34.192274   41427 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-880777 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-24 19:57:34.213706616 +0000 UTC m=+3415.979185609
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-880777 -n running-upgrade-880777
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-880777 -n running-upgrade-880777: exit status 4 (274.51362ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 19:57:34.454349   41930 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-880777" does not appear in /home/jenkins/minikube-integration/17485-9023/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-880777" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-880777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-880777
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-880777: (1.589055271s)
--- FAIL: TestRunningBinaryUpgrade (168.47s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (287.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1218745114.exe start -p stopped-upgrade-145190 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.1218745114.exe start -p stopped-upgrade-145190 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m7.4409166s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.1218745114.exe -p stopped-upgrade-145190 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.1218745114.exe -p stopped-upgrade-145190 stop: (1m32.717531535s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-145190 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-145190 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m7.731086259s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-145190] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-145190 in cluster stopped-upgrade-145190
	* Restarting existing kvm2 VM for "stopped-upgrade-145190" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 20:00:11.718350   45898 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:00:11.718635   45898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:00:11.718645   45898 out.go:309] Setting ErrFile to fd 2...
	I1024 20:00:11.718650   45898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:00:11.718840   45898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:00:11.719359   45898 out.go:303] Setting JSON to false
	I1024 20:00:11.720260   45898 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5910,"bootTime":1698171702,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 20:00:11.720317   45898 start.go:138] virtualization: kvm guest
	I1024 20:00:11.723230   45898 out.go:177] * [stopped-upgrade-145190] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 20:00:11.724704   45898 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:00:11.724661   45898 notify.go:220] Checking for updates...
	I1024 20:00:11.726300   45898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:00:11.728320   45898 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:00:11.730300   45898 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:00:11.731754   45898 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 20:00:11.733292   45898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:00:11.735766   45898 config.go:182] Loaded profile config "stopped-upgrade-145190": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1024 20:00:11.735786   45898 start_flags.go:689] config upgrade: Driver=kvm2
	I1024 20:00:11.735799   45898 start_flags.go:701] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 20:00:11.735904   45898 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/stopped-upgrade-145190/config.json ...
	I1024 20:00:11.736660   45898 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:00:11.736717   45898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:00:11.751856   45898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46669
	I1024 20:00:11.752262   45898 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:00:11.752878   45898 main.go:141] libmachine: Using API Version  1
	I1024 20:00:11.752899   45898 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:00:11.753227   45898 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:00:11.753468   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .DriverName
	I1024 20:00:11.755523   45898 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 20:00:11.756670   45898 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:00:11.756999   45898 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:00:11.757071   45898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:00:11.771154   45898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I1024 20:00:11.771604   45898 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:00:11.772138   45898 main.go:141] libmachine: Using API Version  1
	I1024 20:00:11.772169   45898 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:00:11.772490   45898 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:00:11.772688   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .DriverName
	I1024 20:00:11.809187   45898 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 20:00:11.811215   45898 start.go:298] selected driver: kvm2
	I1024 20:00:11.811234   45898 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-145190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.31 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 20:00:11.811375   45898 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:00:11.812284   45898 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:11.812348   45898 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 20:00:11.827054   45898 install.go:137] /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1024 20:00:11.827366   45898 cni.go:84] Creating CNI manager for ""
	I1024 20:00:11.827385   45898 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1024 20:00:11.827393   45898 start_flags.go:323] config:
	{Name:stopped-upgrade-145190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.31 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 20:00:11.827545   45898 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:11.829359   45898 out.go:177] * Starting control plane node stopped-upgrade-145190 in cluster stopped-upgrade-145190
	I1024 20:00:11.830835   45898 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1024 20:00:11.858936   45898 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1024 20:00:11.859095   45898 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/stopped-upgrade-145190/config.json ...
	I1024 20:00:11.859168   45898 cache.go:107] acquiring lock: {Name:mk8513e8168be955085f93d6b32fea4f84fc85b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:11.859232   45898 cache.go:107] acquiring lock: {Name:mkabe9509a46135b02bebc423eb505c49f9ff3ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:11.859227   45898 cache.go:107] acquiring lock: {Name:mk735c77cd83accc7a1217449f6c44ae25f80f3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:11.859298   45898 cache.go:115] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1024 20:00:11.859304   45898 cache.go:115] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1024 20:00:11.859316   45898 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 161.235µs
	I1024 20:00:11.859316   45898 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 85.227µs
	I1024 20:00:11.859332   45898 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1024 20:00:11.859337   45898 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1024 20:00:11.859176   45898 cache.go:107] acquiring lock: {Name:mk800859e863c9c721aef96ed800bfd37a969241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:11.859202   45898 cache.go:107] acquiring lock: {Name:mk3cadfd39c1532c7ddff0d0b209d1625b153615 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:11.859264   45898 cache.go:107] acquiring lock: {Name:mk11b35d63755e95e387e3ac6d6cba0a04d43849 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:11.859375   45898 cache.go:115] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1024 20:00:11.859384   45898 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 158.928µs
	I1024 20:00:11.859393   45898 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1024 20:00:11.859410   45898 start.go:365] acquiring machines lock for stopped-upgrade-145190: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:00:11.859422   45898 cache.go:115] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1024 20:00:11.859394   45898 cache.go:107] acquiring lock: {Name:mk2ae1454cdcf98345f417f995dfd08b4018871a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:11.859435   45898 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 264.999µs
	I1024 20:00:11.859449   45898 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1024 20:00:11.859454   45898 cache.go:115] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1024 20:00:11.859492   45898 cache.go:115] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1024 20:00:11.859329   45898 cache.go:107] acquiring lock: {Name:mk64b100caaf53cfecea06c9a7cbc7fd3a7c24bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:11.859498   45898 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 234.582µs
	I1024 20:00:11.859502   45898 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 320.307µs
	I1024 20:00:11.859512   45898 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1024 20:00:11.859516   45898 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1024 20:00:11.859522   45898 cache.go:115] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1024 20:00:11.859544   45898 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 188.611µs
	I1024 20:00:11.859559   45898 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1024 20:00:11.859570   45898 cache.go:115] /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1024 20:00:11.859578   45898 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 281.446µs
	I1024 20:00:11.859595   45898 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1024 20:00:11.859606   45898 cache.go:87] Successfully saved all images to host disk.
	I1024 20:00:37.350791   45898 start.go:369] acquired machines lock for "stopped-upgrade-145190" in 25.491359348s
	I1024 20:00:37.350838   45898 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:00:37.350846   45898 fix.go:54] fixHost starting: minikube
	I1024 20:00:37.351200   45898 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:00:37.351245   45898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:00:37.368587   45898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34903
	I1024 20:00:37.369007   45898 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:00:37.369566   45898 main.go:141] libmachine: Using API Version  1
	I1024 20:00:37.369597   45898 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:00:37.369996   45898 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:00:37.370200   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .DriverName
	I1024 20:00:37.370357   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetState
	I1024 20:00:37.372045   45898 fix.go:102] recreateIfNeeded on stopped-upgrade-145190: state=Stopped err=<nil>
	I1024 20:00:37.372070   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .DriverName
	W1024 20:00:37.372221   45898 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:00:37.541467   45898 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-145190" ...
	I1024 20:00:37.767519   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .Start
	I1024 20:00:37.768743   45898 main.go:141] libmachine: (stopped-upgrade-145190) Ensuring networks are active...
	I1024 20:00:37.769691   45898 main.go:141] libmachine: (stopped-upgrade-145190) Ensuring network default is active
	I1024 20:00:37.770058   45898 main.go:141] libmachine: (stopped-upgrade-145190) Ensuring network minikube-net is active
	I1024 20:00:37.770421   45898 main.go:141] libmachine: (stopped-upgrade-145190) Getting domain xml...
	I1024 20:00:37.771298   45898 main.go:141] libmachine: (stopped-upgrade-145190) Creating domain...
	I1024 20:00:39.583891   45898 main.go:141] libmachine: (stopped-upgrade-145190) Waiting to get IP...
	I1024 20:00:39.584964   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:39.585436   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:39.585560   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:39.585444   46043 retry.go:31] will retry after 253.358086ms: waiting for machine to come up
	I1024 20:00:39.841006   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:39.841581   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:39.841605   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:39.841527   46043 retry.go:31] will retry after 261.130472ms: waiting for machine to come up
	I1024 20:00:40.103995   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:40.104632   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:40.104679   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:40.104595   46043 retry.go:31] will retry after 484.745593ms: waiting for machine to come up
	I1024 20:00:40.591523   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:40.592063   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:40.592105   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:40.592004   46043 retry.go:31] will retry after 408.201061ms: waiting for machine to come up
	I1024 20:00:41.001738   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:41.002310   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:41.002499   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:41.002421   46043 retry.go:31] will retry after 718.029843ms: waiting for machine to come up
	I1024 20:00:41.721813   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:41.722394   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:41.722424   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:41.722335   46043 retry.go:31] will retry after 677.736677ms: waiting for machine to come up
	I1024 20:00:42.401629   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:42.402146   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:42.402189   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:42.402107   46043 retry.go:31] will retry after 741.330571ms: waiting for machine to come up
	I1024 20:00:43.144686   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:43.145241   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:43.145278   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:43.145215   46043 retry.go:31] will retry after 952.989399ms: waiting for machine to come up
	I1024 20:00:44.099801   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:44.100586   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:44.100724   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:44.100673   46043 retry.go:31] will retry after 1.758675467s: waiting for machine to come up
	I1024 20:00:45.861832   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:45.862364   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:45.862388   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:45.862316   46043 retry.go:31] will retry after 1.719359955s: waiting for machine to come up
	I1024 20:00:47.583418   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:47.583934   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:47.583965   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:47.583874   46043 retry.go:31] will retry after 2.481180382s: waiting for machine to come up
	I1024 20:00:50.066567   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:50.067127   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:50.067161   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:50.067068   46043 retry.go:31] will retry after 2.353602911s: waiting for machine to come up
	I1024 20:00:52.422219   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:52.422698   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:52.422719   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:52.422624   46043 retry.go:31] will retry after 4.483565346s: waiting for machine to come up
	I1024 20:00:56.910795   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:56.911318   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:56.911357   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:56.911292   46043 retry.go:31] will retry after 4.743729078s: waiting for machine to come up
	I1024 20:01:01.658510   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:01.659221   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:01:01.659254   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:01:01.659169   46043 retry.go:31] will retry after 5.172231673s: waiting for machine to come up
	I1024 20:01:07.361064   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:07.361441   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:01:07.361476   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:01:07.361408   46043 retry.go:31] will retry after 6.081301338s: waiting for machine to come up
	I1024 20:01:13.444545   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.445222   45898 main.go:141] libmachine: (stopped-upgrade-145190) Found IP for machine: 192.168.50.31
	I1024 20:01:13.445245   45898 main.go:141] libmachine: (stopped-upgrade-145190) Reserving static IP address...
	I1024 20:01:13.445270   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has current primary IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.445764   45898 main.go:141] libmachine: (stopped-upgrade-145190) Reserved static IP address: 192.168.50.31
	I1024 20:01:13.445789   45898 main.go:141] libmachine: (stopped-upgrade-145190) Waiting for SSH to be available...
	I1024 20:01:13.445816   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "stopped-upgrade-145190", mac: "52:54:00:0b:a4:79", ip: "192.168.50.31"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:13.445861   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-145190", mac: "52:54:00:0b:a4:79", ip: "192.168.50.31"}
	I1024 20:01:13.445873   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | Getting to WaitForSSH function...
	I1024 20:01:13.448058   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.448371   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:13.448411   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.448492   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | Using SSH client type: external
	I1024 20:01:13.448523   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/stopped-upgrade-145190/id_rsa (-rw-------)
	I1024 20:01:13.448560   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/stopped-upgrade-145190/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:01:13.448578   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | About to run SSH command:
	I1024 20:01:13.448609   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | exit 0
	I1024 20:01:13.580912   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | SSH cmd err, output: <nil>: 
	I1024 20:01:13.581253   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetConfigRaw
	I1024 20:01:13.581874   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetIP
	I1024 20:01:13.584500   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.584879   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:13.584930   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.585152   45898 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/stopped-upgrade-145190/config.json ...
	I1024 20:01:13.585390   45898 machine.go:88] provisioning docker machine ...
	I1024 20:01:13.585408   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .DriverName
	I1024 20:01:13.585581   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetMachineName
	I1024 20:01:13.585701   45898 buildroot.go:166] provisioning hostname "stopped-upgrade-145190"
	I1024 20:01:13.585714   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetMachineName
	I1024 20:01:13.585842   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHHostname
	I1024 20:01:13.588574   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.588985   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:13.589029   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.589137   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHPort
	I1024 20:01:13.589361   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHKeyPath
	I1024 20:01:13.589511   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHKeyPath
	I1024 20:01:13.589686   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHUsername
	I1024 20:01:13.589880   45898 main.go:141] libmachine: Using SSH client type: native
	I1024 20:01:13.590355   45898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I1024 20:01:13.590378   45898 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-145190 && echo "stopped-upgrade-145190" | sudo tee /etc/hostname
	I1024 20:01:13.715319   45898 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-145190
	
	I1024 20:01:13.715342   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHHostname
	I1024 20:01:13.718014   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.718366   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:13.718394   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.718562   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHPort
	I1024 20:01:13.718741   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHKeyPath
	I1024 20:01:13.718869   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHKeyPath
	I1024 20:01:13.718996   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHUsername
	I1024 20:01:13.719138   45898 main.go:141] libmachine: Using SSH client type: native
	I1024 20:01:13.719464   45898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I1024 20:01:13.719487   45898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-145190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-145190/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-145190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:01:13.846068   45898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:01:13.846097   45898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:01:13.846128   45898 buildroot.go:174] setting up certificates
	I1024 20:01:13.846135   45898 provision.go:83] configureAuth start
	I1024 20:01:13.846144   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetMachineName
	I1024 20:01:13.846425   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetIP
	I1024 20:01:13.848927   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.849369   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:13.849401   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.849564   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHHostname
	I1024 20:01:13.851966   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.852315   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:13.852355   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:13.852452   45898 provision.go:138] copyHostCerts
	I1024 20:01:13.852498   45898 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:01:13.852508   45898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:01:13.852570   45898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:01:13.852663   45898 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:01:13.852671   45898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:01:13.852693   45898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:01:13.852756   45898 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:01:13.852765   45898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:01:13.852785   45898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:01:13.852832   45898 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-145190 san=[192.168.50.31 192.168.50.31 localhost 127.0.0.1 minikube stopped-upgrade-145190]
	I1024 20:01:14.178088   45898 provision.go:172] copyRemoteCerts
	I1024 20:01:14.178141   45898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:01:14.178164   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHHostname
	I1024 20:01:14.181146   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:14.181577   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:14.181616   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:14.181808   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHPort
	I1024 20:01:14.182018   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHKeyPath
	I1024 20:01:14.182174   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHUsername
	I1024 20:01:14.182299   45898 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/stopped-upgrade-145190/id_rsa Username:docker}
	I1024 20:01:14.267194   45898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:01:14.281112   45898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 20:01:14.294343   45898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:01:14.307424   45898 provision.go:86] duration metric: configureAuth took 461.278115ms
	I1024 20:01:14.307454   45898 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:01:14.307598   45898 config.go:182] Loaded profile config "stopped-upgrade-145190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1024 20:01:14.307670   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHHostname
	I1024 20:01:14.310163   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:14.310546   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:14.310572   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:14.310784   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHPort
	I1024 20:01:14.311011   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHKeyPath
	I1024 20:01:14.311199   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHKeyPath
	I1024 20:01:14.311376   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHUsername
	I1024 20:01:14.311572   45898 main.go:141] libmachine: Using SSH client type: native
	I1024 20:01:14.311951   45898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I1024 20:01:14.311976   45898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:01:18.614804   45898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:01:18.614833   45898 machine.go:91] provisioned docker machine in 5.029433226s
	I1024 20:01:18.614850   45898 start.go:300] post-start starting for "stopped-upgrade-145190" (driver="kvm2")
	I1024 20:01:18.614860   45898 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:01:18.614884   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .DriverName
	I1024 20:01:18.615241   45898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:01:18.615286   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHHostname
	I1024 20:01:18.618242   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:18.618605   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:18.618644   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:18.618790   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHPort
	I1024 20:01:18.618994   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHKeyPath
	I1024 20:01:18.619188   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHUsername
	I1024 20:01:18.619345   45898 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/stopped-upgrade-145190/id_rsa Username:docker}
	I1024 20:01:18.708196   45898 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:01:18.712431   45898 info.go:137] Remote host: Buildroot 2019.02.7
	I1024 20:01:18.712459   45898 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:01:18.712522   45898 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:01:18.712587   45898 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:01:18.712674   45898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:01:18.718398   45898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:01:18.732099   45898 start.go:303] post-start completed in 117.236488ms
	I1024 20:01:18.732121   45898 fix.go:56] fixHost completed within 41.381275537s
	I1024 20:01:18.732141   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHHostname
	I1024 20:01:18.735148   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:18.735564   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:18.735598   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:18.735806   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHPort
	I1024 20:01:18.736036   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHKeyPath
	I1024 20:01:18.736234   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHKeyPath
	I1024 20:01:18.736424   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHUsername
	I1024 20:01:18.736592   45898 main.go:141] libmachine: Using SSH client type: native
	I1024 20:01:18.737059   45898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I1024 20:01:18.737079   45898 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1024 20:01:18.857844   45898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698177678.800080186
	
	I1024 20:01:18.857875   45898 fix.go:206] guest clock: 1698177678.800080186
	I1024 20:01:18.857886   45898 fix.go:219] Guest: 2023-10-24 20:01:18.800080186 +0000 UTC Remote: 2023-10-24 20:01:18.732124565 +0000 UTC m=+67.064746088 (delta=67.955621ms)
	I1024 20:01:18.857910   45898 fix.go:190] guest clock delta is within tolerance: 67.955621ms
	I1024 20:01:18.857920   45898 start.go:83] releasing machines lock for "stopped-upgrade-145190", held for 41.507097452s
	I1024 20:01:18.857962   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .DriverName
	I1024 20:01:18.858238   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetIP
	I1024 20:01:18.861290   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:18.861700   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:18.861730   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:18.861918   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .DriverName
	I1024 20:01:18.862449   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .DriverName
	I1024 20:01:18.862622   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .DriverName
	I1024 20:01:18.862715   45898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:01:18.862752   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHHostname
	I1024 20:01:18.862808   45898 ssh_runner.go:195] Run: cat /version.json
	I1024 20:01:18.862830   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHHostname
	I1024 20:01:18.865232   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:18.865476   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:18.865642   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:18.865690   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:18.865823   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHPort
	I1024 20:01:18.866019   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:a4:79", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:57:01 +0000 UTC Type:0 Mac:52:54:00:0b:a4:79 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:stopped-upgrade-145190 Clientid:01:52:54:00:0b:a4:79}
	I1024 20:01:18.866025   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHKeyPath
	I1024 20:01:18.866044   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined IP address 192.168.50.31 and MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:18.866219   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHUsername
	I1024 20:01:18.866223   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHPort
	I1024 20:01:18.866422   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHKeyPath
	I1024 20:01:18.866417   45898 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/stopped-upgrade-145190/id_rsa Username:docker}
	I1024 20:01:18.866589   45898 main.go:141] libmachine: (stopped-upgrade-145190) Calling .GetSSHUsername
	I1024 20:01:18.866709   45898 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/stopped-upgrade-145190/id_rsa Username:docker}
	W1024 20:01:18.976235   45898 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1024 20:01:18.976308   45898 ssh_runner.go:195] Run: systemctl --version
	I1024 20:01:18.981821   45898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:01:19.045522   45898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:01:19.051471   45898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:01:19.051542   45898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:01:19.056467   45898 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1024 20:01:19.056484   45898 start.go:472] detecting cgroup driver to use...
	I1024 20:01:19.056543   45898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:01:19.067217   45898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:01:19.076252   45898 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:01:19.076300   45898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:01:19.084353   45898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:01:19.091827   45898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1024 20:01:19.099622   45898 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1024 20:01:19.099692   45898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:01:19.178654   45898 docker.go:214] disabling docker service ...
	I1024 20:01:19.178722   45898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:01:19.188996   45898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:01:19.196697   45898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:01:19.268439   45898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:01:19.358324   45898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:01:19.366485   45898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:01:19.376992   45898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1024 20:01:19.377046   45898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:01:19.385591   45898 out.go:177] 
	W1024 20:01:19.386908   45898 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1024 20:01:19.386932   45898 out.go:239] * 
	* 
	W1024 20:01:19.387825   45898 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 20:01:19.389444   45898 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-145190 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (287.90s)
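For context, the exit status 90 above traces to the pause_image step logged by crio.go:59 at 20:01:19: the v1.6.2 buildroot guest has no /etc/crio/crio.conf.d/02-crio.conf, so the sed exits non-zero and minikube aborts with RUNTIME_ENABLE. Below is a minimal shell sketch of that step plus a hypothetical fallback to the top-level crio.conf; the first command is taken from the log, the guard is illustrative only and is not minikube's actual remediation.

	# Step as logged (fails because the drop-in file is absent on the old image)
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf
	# Hypothetical guard: fall back to the legacy /etc/crio/crio.conf when the drop-in does not exist
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"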

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (57.63s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-636215 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-636215 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.201371518s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-636215] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-636215 in cluster pause-636215
	* Updating the running kvm2 "pause-636215" VM ...
	* Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-636215" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 20:00:08.933810   45839 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:00:08.933964   45839 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:00:08.933979   45839 out.go:309] Setting ErrFile to fd 2...
	I1024 20:00:08.933986   45839 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:00:08.934228   45839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:00:08.934791   45839 out.go:303] Setting JSON to false
	I1024 20:00:08.935748   45839 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5907,"bootTime":1698171702,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 20:00:08.935805   45839 start.go:138] virtualization: kvm guest
	I1024 20:00:08.938177   45839 out.go:177] * [pause-636215] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 20:00:08.939667   45839 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:00:08.941200   45839 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:00:08.939732   45839 notify.go:220] Checking for updates...
	I1024 20:00:08.942929   45839 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:00:08.944457   45839 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:00:08.945812   45839 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 20:00:08.947176   45839 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:00:08.949001   45839 config.go:182] Loaded profile config "pause-636215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:00:08.949587   45839 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:00:08.949650   45839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:00:08.966149   45839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39581
	I1024 20:00:08.966584   45839 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:00:08.967274   45839 main.go:141] libmachine: Using API Version  1
	I1024 20:00:08.967304   45839 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:00:08.967673   45839 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:00:08.967894   45839 main.go:141] libmachine: (pause-636215) Calling .DriverName
	I1024 20:00:08.968142   45839 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:00:08.968467   45839 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:00:08.968508   45839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:00:08.982570   45839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39671
	I1024 20:00:08.982910   45839 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:00:08.983344   45839 main.go:141] libmachine: Using API Version  1
	I1024 20:00:08.983363   45839 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:00:08.983766   45839 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:00:08.983936   45839 main.go:141] libmachine: (pause-636215) Calling .DriverName
	I1024 20:00:09.019385   45839 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 20:00:09.020682   45839 start.go:298] selected driver: kvm2
	I1024 20:00:09.020699   45839 start.go:902] validating driver "kvm2" against &{Name:pause-636215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.3 ClusterName:pause-636215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:00:09.020880   45839 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:00:09.021337   45839 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:09.021469   45839 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 20:00:09.036133   45839 install.go:137] /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1024 20:00:09.036790   45839 cni.go:84] Creating CNI manager for ""
	I1024 20:00:09.036815   45839 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:00:09.036828   45839 start_flags.go:323] config:
	{Name:pause-636215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-636215 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false po
rtainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:00:09.037016   45839 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:09.038829   45839 out.go:177] * Starting control plane node pause-636215 in cluster pause-636215
	I1024 20:00:09.040205   45839 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:00:09.040252   45839 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1024 20:00:09.040262   45839 cache.go:57] Caching tarball of preloaded images
	I1024 20:00:09.040353   45839 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 20:00:09.040368   45839 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 20:00:09.040539   45839 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/pause-636215/config.json ...
	I1024 20:00:09.040784   45839 start.go:365] acquiring machines lock for pause-636215: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:00:30.763000   45839 start.go:369] acquired machines lock for "pause-636215" in 21.722184227s
	I1024 20:00:30.763051   45839 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:00:30.763059   45839 fix.go:54] fixHost starting: 
	I1024 20:00:30.763470   45839 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:00:30.763530   45839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:00:30.783370   45839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I1024 20:00:30.783781   45839 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:00:30.784319   45839 main.go:141] libmachine: Using API Version  1
	I1024 20:00:30.784349   45839 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:00:30.784728   45839 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:00:30.784954   45839 main.go:141] libmachine: (pause-636215) Calling .DriverName
	I1024 20:00:30.785121   45839 main.go:141] libmachine: (pause-636215) Calling .GetState
	I1024 20:00:30.786717   45839 fix.go:102] recreateIfNeeded on pause-636215: state=Running err=<nil>
	W1024 20:00:30.786739   45839 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:00:30.788912   45839 out.go:177] * Updating the running kvm2 "pause-636215" VM ...
	I1024 20:00:30.790350   45839 machine.go:88] provisioning docker machine ...
	I1024 20:00:30.790372   45839 main.go:141] libmachine: (pause-636215) Calling .DriverName
	I1024 20:00:30.790587   45839 main.go:141] libmachine: (pause-636215) Calling .GetMachineName
	I1024 20:00:30.790740   45839 buildroot.go:166] provisioning hostname "pause-636215"
	I1024 20:00:30.790769   45839 main.go:141] libmachine: (pause-636215) Calling .GetMachineName
	I1024 20:00:30.790951   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHHostname
	I1024 20:00:30.793323   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:30.793710   45839 main.go:141] libmachine: (pause-636215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:32:38", ip: ""} in network mk-pause-636215: {Iface:virbr1 ExpiryTime:2023-10-24 20:58:42 +0000 UTC Type:0 Mac:52:54:00:88:32:38 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:pause-636215 Clientid:01:52:54:00:88:32:38}
	I1024 20:00:30.793730   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined IP address 192.168.39.169 and MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:30.793917   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHPort
	I1024 20:00:30.794080   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHKeyPath
	I1024 20:00:30.794228   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHKeyPath
	I1024 20:00:30.794346   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHUsername
	I1024 20:00:30.794477   45839 main.go:141] libmachine: Using SSH client type: native
	I1024 20:00:30.794960   45839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1024 20:00:30.794982   45839 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-636215 && echo "pause-636215" | sudo tee /etc/hostname
	I1024 20:00:30.929620   45839 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-636215
	
	I1024 20:00:30.929651   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHHostname
	I1024 20:00:30.932568   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:30.932933   45839 main.go:141] libmachine: (pause-636215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:32:38", ip: ""} in network mk-pause-636215: {Iface:virbr1 ExpiryTime:2023-10-24 20:58:42 +0000 UTC Type:0 Mac:52:54:00:88:32:38 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:pause-636215 Clientid:01:52:54:00:88:32:38}
	I1024 20:00:30.932964   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined IP address 192.168.39.169 and MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:30.933185   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHPort
	I1024 20:00:30.933382   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHKeyPath
	I1024 20:00:30.933558   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHKeyPath
	I1024 20:00:30.933694   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHUsername
	I1024 20:00:30.933901   45839 main.go:141] libmachine: Using SSH client type: native
	I1024 20:00:30.934365   45839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1024 20:00:30.934392   45839 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-636215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-636215/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-636215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:00:31.047379   45839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:00:31.047406   45839 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:00:31.047453   45839 buildroot.go:174] setting up certificates
	I1024 20:00:31.047462   45839 provision.go:83] configureAuth start
	I1024 20:00:31.047479   45839 main.go:141] libmachine: (pause-636215) Calling .GetMachineName
	I1024 20:00:31.047804   45839 main.go:141] libmachine: (pause-636215) Calling .GetIP
	I1024 20:00:31.050585   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:31.050975   45839 main.go:141] libmachine: (pause-636215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:32:38", ip: ""} in network mk-pause-636215: {Iface:virbr1 ExpiryTime:2023-10-24 20:58:42 +0000 UTC Type:0 Mac:52:54:00:88:32:38 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:pause-636215 Clientid:01:52:54:00:88:32:38}
	I1024 20:00:31.051004   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined IP address 192.168.39.169 and MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:31.051172   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHHostname
	I1024 20:00:31.053626   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:31.053960   45839 main.go:141] libmachine: (pause-636215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:32:38", ip: ""} in network mk-pause-636215: {Iface:virbr1 ExpiryTime:2023-10-24 20:58:42 +0000 UTC Type:0 Mac:52:54:00:88:32:38 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:pause-636215 Clientid:01:52:54:00:88:32:38}
	I1024 20:00:31.054001   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined IP address 192.168.39.169 and MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:31.054133   45839 provision.go:138] copyHostCerts
	I1024 20:00:31.054187   45839 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:00:31.054207   45839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:00:31.054271   45839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:00:31.054378   45839 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:00:31.054389   45839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:00:31.054419   45839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:00:31.054499   45839 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:00:31.054509   45839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:00:31.054545   45839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:00:31.054605   45839 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.pause-636215 san=[192.168.39.169 192.168.39.169 localhost 127.0.0.1 minikube pause-636215]
	I1024 20:00:31.259793   45839 provision.go:172] copyRemoteCerts
	I1024 20:00:31.259847   45839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:00:31.259868   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHHostname
	I1024 20:00:31.262929   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:31.263312   45839 main.go:141] libmachine: (pause-636215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:32:38", ip: ""} in network mk-pause-636215: {Iface:virbr1 ExpiryTime:2023-10-24 20:58:42 +0000 UTC Type:0 Mac:52:54:00:88:32:38 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:pause-636215 Clientid:01:52:54:00:88:32:38}
	I1024 20:00:31.263346   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined IP address 192.168.39.169 and MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:31.263530   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHPort
	I1024 20:00:31.263761   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHKeyPath
	I1024 20:00:31.263943   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHUsername
	I1024 20:00:31.264111   45839 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/pause-636215/id_rsa Username:docker}
	I1024 20:00:31.350868   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:00:31.376571   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1024 20:00:31.409655   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:00:31.436355   45839 provision.go:86] duration metric: configureAuth took 388.87268ms
	I1024 20:00:31.436384   45839 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:00:31.436653   45839 config.go:182] Loaded profile config "pause-636215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:00:31.436720   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHHostname
	I1024 20:00:31.439893   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:31.440319   45839 main.go:141] libmachine: (pause-636215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:32:38", ip: ""} in network mk-pause-636215: {Iface:virbr1 ExpiryTime:2023-10-24 20:58:42 +0000 UTC Type:0 Mac:52:54:00:88:32:38 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:pause-636215 Clientid:01:52:54:00:88:32:38}
	I1024 20:00:31.440350   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined IP address 192.168.39.169 and MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:31.440500   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHPort
	I1024 20:00:31.440678   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHKeyPath
	I1024 20:00:31.440828   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHKeyPath
	I1024 20:00:31.440979   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHUsername
	I1024 20:00:31.441203   45839 main.go:141] libmachine: Using SSH client type: native
	I1024 20:00:31.441569   45839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1024 20:00:31.441586   45839 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:00:37.093533   45839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:00:37.093558   45839 machine.go:91] provisioned docker machine in 6.30319429s
	I1024 20:00:37.093569   45839 start.go:300] post-start starting for "pause-636215" (driver="kvm2")
	I1024 20:00:37.093580   45839 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:00:37.093601   45839 main.go:141] libmachine: (pause-636215) Calling .DriverName
	I1024 20:00:37.094007   45839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:00:37.094041   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHHostname
	I1024 20:00:37.097091   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:37.097512   45839 main.go:141] libmachine: (pause-636215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:32:38", ip: ""} in network mk-pause-636215: {Iface:virbr1 ExpiryTime:2023-10-24 20:58:42 +0000 UTC Type:0 Mac:52:54:00:88:32:38 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:pause-636215 Clientid:01:52:54:00:88:32:38}
	I1024 20:00:37.097550   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined IP address 192.168.39.169 and MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:37.097732   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHPort
	I1024 20:00:37.097909   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHKeyPath
	I1024 20:00:37.098089   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHUsername
	I1024 20:00:37.098245   45839 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/pause-636215/id_rsa Username:docker}
	I1024 20:00:37.188145   45839 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:00:37.192305   45839 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:00:37.192328   45839 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:00:37.192409   45839 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:00:37.192514   45839 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:00:37.192645   45839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:00:37.202501   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:00:37.226648   45839 start.go:303] post-start completed in 133.063632ms
	I1024 20:00:37.226674   45839 fix.go:56] fixHost completed within 6.46361494s
	I1024 20:00:37.226698   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHHostname
	I1024 20:00:37.229809   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:37.230271   45839 main.go:141] libmachine: (pause-636215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:32:38", ip: ""} in network mk-pause-636215: {Iface:virbr1 ExpiryTime:2023-10-24 20:58:42 +0000 UTC Type:0 Mac:52:54:00:88:32:38 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:pause-636215 Clientid:01:52:54:00:88:32:38}
	I1024 20:00:37.230309   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined IP address 192.168.39.169 and MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:37.230434   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHPort
	I1024 20:00:37.230660   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHKeyPath
	I1024 20:00:37.230851   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHKeyPath
	I1024 20:00:37.231049   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHUsername
	I1024 20:00:37.231286   45839 main.go:141] libmachine: Using SSH client type: native
	I1024 20:00:37.231770   45839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1024 20:00:37.231786   45839 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1024 20:00:37.350674   45839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698177637.346631213
	
	I1024 20:00:37.350694   45839 fix.go:206] guest clock: 1698177637.346631213
	I1024 20:00:37.350703   45839 fix.go:219] Guest: 2023-10-24 20:00:37.346631213 +0000 UTC Remote: 2023-10-24 20:00:37.226679239 +0000 UTC m=+28.344065734 (delta=119.951974ms)
	I1024 20:00:37.350724   45839 fix.go:190] guest clock delta is within tolerance: 119.951974ms
	I1024 20:00:37.350730   45839 start.go:83] releasing machines lock for "pause-636215", held for 6.587701504s
	I1024 20:00:37.350752   45839 main.go:141] libmachine: (pause-636215) Calling .DriverName
	I1024 20:00:37.351461   45839 main.go:141] libmachine: (pause-636215) Calling .GetIP
	I1024 20:00:37.354381   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:37.354868   45839 main.go:141] libmachine: (pause-636215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:32:38", ip: ""} in network mk-pause-636215: {Iface:virbr1 ExpiryTime:2023-10-24 20:58:42 +0000 UTC Type:0 Mac:52:54:00:88:32:38 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:pause-636215 Clientid:01:52:54:00:88:32:38}
	I1024 20:00:37.354902   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined IP address 192.168.39.169 and MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:37.355074   45839 main.go:141] libmachine: (pause-636215) Calling .DriverName
	I1024 20:00:37.355591   45839 main.go:141] libmachine: (pause-636215) Calling .DriverName
	I1024 20:00:37.355879   45839 main.go:141] libmachine: (pause-636215) Calling .DriverName
	I1024 20:00:37.355977   45839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:00:37.356027   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHHostname
	I1024 20:00:37.356346   45839 ssh_runner.go:195] Run: cat /version.json
	I1024 20:00:37.356370   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHHostname
	I1024 20:00:37.359126   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:37.359450   45839 main.go:141] libmachine: (pause-636215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:32:38", ip: ""} in network mk-pause-636215: {Iface:virbr1 ExpiryTime:2023-10-24 20:58:42 +0000 UTC Type:0 Mac:52:54:00:88:32:38 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:pause-636215 Clientid:01:52:54:00:88:32:38}
	I1024 20:00:37.359493   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined IP address 192.168.39.169 and MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:37.359522   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:37.359590   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHPort
	I1024 20:00:37.359727   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHKeyPath
	I1024 20:00:37.359872   45839 main.go:141] libmachine: (pause-636215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:32:38", ip: ""} in network mk-pause-636215: {Iface:virbr1 ExpiryTime:2023-10-24 20:58:42 +0000 UTC Type:0 Mac:52:54:00:88:32:38 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:pause-636215 Clientid:01:52:54:00:88:32:38}
	I1024 20:00:37.359896   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined IP address 192.168.39.169 and MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:37.359926   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHUsername
	I1024 20:00:37.359990   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHPort
	I1024 20:00:37.360042   45839 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/pause-636215/id_rsa Username:docker}
	I1024 20:00:37.360100   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHKeyPath
	I1024 20:00:37.360202   45839 main.go:141] libmachine: (pause-636215) Calling .GetSSHUsername
	I1024 20:00:37.360303   45839 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/pause-636215/id_rsa Username:docker}
	I1024 20:00:37.447059   45839 ssh_runner.go:195] Run: systemctl --version
	I1024 20:00:37.472796   45839 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:00:37.747121   45839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:00:37.754504   45839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:00:37.754581   45839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:00:37.765965   45839 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1024 20:00:37.765995   45839 start.go:472] detecting cgroup driver to use...
	I1024 20:00:37.766047   45839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:00:37.784612   45839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:00:37.801737   45839 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:00:37.801781   45839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:00:37.816702   45839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:00:37.829660   45839 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:00:37.980531   45839 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:00:38.145394   45839 docker.go:214] disabling docker service ...
	I1024 20:00:38.145494   45839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:00:38.163579   45839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:00:38.181984   45839 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:00:38.327698   45839 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:00:38.455251   45839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:00:38.472150   45839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:00:38.495001   45839 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:00:38.495086   45839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:00:38.513541   45839 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:00:38.513613   45839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:00:38.528088   45839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:00:38.539909   45839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:00:38.551138   45839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:00:38.562900   45839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:00:38.574079   45839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:00:38.587859   45839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:00:38.740250   45839 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:00:42.542945   45839 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.802658666s)
	I1024 20:00:42.542973   45839 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:00:42.543042   45839 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:00:42.549799   45839 start.go:540] Will wait 60s for crictl version
	I1024 20:00:42.549872   45839 ssh_runner.go:195] Run: which crictl
	I1024 20:00:42.554867   45839 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:00:42.610281   45839 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:00:42.610370   45839 ssh_runner.go:195] Run: crio --version
	I1024 20:00:42.661911   45839 ssh_runner.go:195] Run: crio --version
	I1024 20:00:42.727441   45839 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 20:00:42.728776   45839 main.go:141] libmachine: (pause-636215) Calling .GetIP
	I1024 20:00:42.731914   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:42.732352   45839 main.go:141] libmachine: (pause-636215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:32:38", ip: ""} in network mk-pause-636215: {Iface:virbr1 ExpiryTime:2023-10-24 20:58:42 +0000 UTC Type:0 Mac:52:54:00:88:32:38 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:pause-636215 Clientid:01:52:54:00:88:32:38}
	I1024 20:00:42.732383   45839 main.go:141] libmachine: (pause-636215) DBG | domain pause-636215 has defined IP address 192.168.39.169 and MAC address 52:54:00:88:32:38 in network mk-pause-636215
	I1024 20:00:42.732613   45839 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 20:00:42.737338   45839 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:00:42.737406   45839 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:00:42.792350   45839 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:00:42.792373   45839 crio.go:415] Images already preloaded, skipping extraction
	I1024 20:00:42.792428   45839 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:00:42.836140   45839 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:00:42.836165   45839 cache_images.go:84] Images are preloaded, skipping loading
	I1024 20:00:42.836263   45839 ssh_runner.go:195] Run: crio config
	I1024 20:00:42.910282   45839 cni.go:84] Creating CNI manager for ""
	I1024 20:00:42.910309   45839 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:00:42.910330   45839 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:00:42.910356   45839 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.169 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-636215 NodeName:pause-636215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:00:42.910580   45839 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-636215"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.169
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.169"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:00:42.910698   45839 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-636215 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:pause-636215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:00:42.910762   45839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:00:42.923993   45839 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:00:42.924067   45839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:00:42.937032   45839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1024 20:00:42.955829   45839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:00:42.975446   45839 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1024 20:00:42.996793   45839 ssh_runner.go:195] Run: grep 192.168.39.169	control-plane.minikube.internal$ /etc/hosts
	I1024 20:00:43.001026   45839 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/pause-636215 for IP: 192.168.39.169
	I1024 20:00:43.001065   45839 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:00:43.001246   45839 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:00:43.001341   45839 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:00:43.001448   45839 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/pause-636215/client.key
	I1024 20:00:43.001538   45839 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/pause-636215/apiserver.key.a3c1dd44
	I1024 20:00:43.001600   45839 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/pause-636215/proxy-client.key
	I1024 20:00:43.001740   45839 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:00:43.001782   45839 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:00:43.001798   45839 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:00:43.001839   45839 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:00:43.001877   45839 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:00:43.001924   45839 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:00:43.001993   45839 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:00:43.002925   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/pause-636215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:00:43.032317   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/pause-636215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1024 20:00:43.057497   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/pause-636215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:00:43.083966   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/pause-636215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:00:43.111970   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:00:43.138932   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:00:43.164127   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:00:43.190796   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:00:43.218710   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:00:43.247210   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:00:43.273037   45839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:00:43.300862   45839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:00:43.321942   45839 ssh_runner.go:195] Run: openssl version
	I1024 20:00:43.329264   45839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:00:43.344473   45839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:00:43.350884   45839 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:00:43.350946   45839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:00:43.358437   45839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:00:43.371250   45839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:00:43.384333   45839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:00:43.390659   45839 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:00:43.390731   45839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:00:43.398011   45839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:00:43.409600   45839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:00:43.423452   45839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:00:43.429347   45839 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:00:43.429411   45839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:00:43.436263   45839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:00:43.446659   45839 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:00:43.451545   45839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:00:43.457555   45839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:00:43.463454   45839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:00:43.469364   45839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:00:43.475263   45839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:00:43.481163   45839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1024 20:00:43.487002   45839 kubeadm.go:404] StartCluster: {Name:pause-636215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-636215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:00:43.487131   45839 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:00:43.487180   45839 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:00:43.560177   45839 cri.go:89] found id: "f56526104e12eda0b7a7745f83c7e49bda2e2efded0963685106dd3244659623"
	I1024 20:00:43.560203   45839 cri.go:89] found id: "b63ca9ca3b568a93fc8679b8d04e214fca6e218cad40f0de3a541ba015f40c27"
	I1024 20:00:43.560211   45839 cri.go:89] found id: "d39b2fa0becbb01a311aec2ebf38ccd72596457a072295930bdc037f7a90c20d"
	I1024 20:00:43.560217   45839 cri.go:89] found id: "516961cc67010792d8b5eb63f9378e82b801664d09031dc5cf2abf43a52eeca9"
	I1024 20:00:43.560223   45839 cri.go:89] found id: "2ca8fe6bdc353bdddf1154d3696053cc3a2198afbd08c0f20ad0a120a10073de"
	I1024 20:00:43.560232   45839 cri.go:89] found id: "6c34fb3e40fd4a77f21208f6b00ed3964eedc4cc823fa8ca8439812444eb5750"
	I1024 20:00:43.560238   45839 cri.go:89] found id: ""
	I1024 20:00:43.560291   45839 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-636215 -n pause-636215
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-636215 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-636215 logs -n 25: (1.646445508s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-784554 sudo                  | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo                  | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo                  | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-164196           | kubernetes-upgrade-164196 | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	| ssh     | -p cilium-784554 sudo cat              | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo cat              | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo                  | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo                  | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo                  | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo find             | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo crio             | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-784554                       | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	| start   | -p pause-636215 --memory=2048          | pause-636215              | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 20:00 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p cert-expiration-051222              | cert-expiration-051222    | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:59 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-912715            | force-systemd-env-912715  | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	| start   | -p force-systemd-flag-569251           | force-systemd-flag-569251 | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 20:00 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-569251 ssh cat      | force-systemd-flag-569251 | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-569251           | force-systemd-flag-569251 | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	| start   | -p cert-options-116938                 | cert-options-116938       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-636215                        | pause-636215              | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:01 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-145190              | stopped-upgrade-145190    | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-116938 ssh                | cert-options-116938       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-116938 -- sudo         | cert-options-116938       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-116938                 | cert-options-116938       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	| start   | -p old-k8s-version-467375              | old-k8s-version-467375    | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 20:00:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 20:00:55.777980   46309 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:00:55.778119   46309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:00:55.778132   46309 out.go:309] Setting ErrFile to fd 2...
	I1024 20:00:55.778141   46309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:00:55.778365   46309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:00:55.779116   46309 out.go:303] Setting JSON to false
	I1024 20:00:55.780116   46309 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5954,"bootTime":1698171702,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 20:00:55.780181   46309 start.go:138] virtualization: kvm guest
	I1024 20:00:55.782853   46309 out.go:177] * [old-k8s-version-467375] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 20:00:55.784305   46309 notify.go:220] Checking for updates...
	I1024 20:00:55.784316   46309 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:00:55.786000   46309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:00:55.787493   46309 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:00:55.788958   46309 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:00:55.790440   46309 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 20:00:55.791952   46309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:00:55.794214   46309 config.go:182] Loaded profile config "cert-expiration-051222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:00:55.794430   46309 config.go:182] Loaded profile config "pause-636215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:00:55.794558   46309 config.go:182] Loaded profile config "stopped-upgrade-145190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1024 20:00:55.794658   46309 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:00:55.832272   46309 out.go:177] * Using the kvm2 driver based on user configuration
	I1024 20:00:55.833733   46309 start.go:298] selected driver: kvm2
	I1024 20:00:55.833749   46309 start.go:902] validating driver "kvm2" against <nil>
	I1024 20:00:55.833760   46309 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:00:55.834555   46309 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:55.834652   46309 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 20:00:55.850467   46309 install.go:137] /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1024 20:00:55.850516   46309 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 20:00:55.850699   46309 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 20:00:55.850762   46309 cni.go:84] Creating CNI manager for ""
	I1024 20:00:55.850773   46309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:00:55.850788   46309 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1024 20:00:55.850796   46309 start_flags.go:323] config:
	{Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:00:55.850945   46309 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:55.853678   46309 out.go:177] * Starting control plane node old-k8s-version-467375 in cluster old-k8s-version-467375
	I1024 20:00:52.422219   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:52.422698   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:52.422719   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:52.422624   46043 retry.go:31] will retry after 4.483565346s: waiting for machine to come up
	I1024 20:00:54.509115   45839 pod_ready.go:102] pod "coredns-5dd5756b68-nfdht" in "kube-system" namespace has status "Ready":"False"
	I1024 20:00:57.009479   45839 pod_ready.go:102] pod "coredns-5dd5756b68-nfdht" in "kube-system" namespace has status "Ready":"False"
	I1024 20:00:58.008355   45839 pod_ready.go:92] pod "coredns-5dd5756b68-nfdht" in "kube-system" namespace has status "Ready":"True"
	I1024 20:00:58.008392   45839 pod_ready.go:81] duration metric: took 5.521332647s waiting for pod "coredns-5dd5756b68-nfdht" in "kube-system" namespace to be "Ready" ...
	I1024 20:00:58.008405   45839 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:00:58.014365   45839 pod_ready.go:92] pod "etcd-pause-636215" in "kube-system" namespace has status "Ready":"True"
	I1024 20:00:58.014386   45839 pod_ready.go:81] duration metric: took 5.973937ms waiting for pod "etcd-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:00:58.014397   45839 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:00:55.855002   46309 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 20:00:55.855044   46309 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1024 20:00:55.855052   46309 cache.go:57] Caching tarball of preloaded images
	I1024 20:00:55.855132   46309 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 20:00:55.855142   46309 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1024 20:00:55.855238   46309 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:00:55.855254   46309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json: {Name:mk43f9e728f338b352792f83f227429aee8984c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:00:55.855390   46309 start.go:365] acquiring machines lock for old-k8s-version-467375: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:00:56.910795   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:56.911318   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:56.911357   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:56.911292   46043 retry.go:31] will retry after 4.743729078s: waiting for machine to come up
	I1024 20:01:01.658510   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:01.659221   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:01:01.659254   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:01:01.659169   46043 retry.go:31] will retry after 5.172231673s: waiting for machine to come up
	I1024 20:01:00.034452   45839 pod_ready.go:102] pod "kube-apiserver-pause-636215" in "kube-system" namespace has status "Ready":"False"
	I1024 20:01:00.534577   45839 pod_ready.go:92] pod "kube-apiserver-pause-636215" in "kube-system" namespace has status "Ready":"True"
	I1024 20:01:00.534602   45839 pod_ready.go:81] duration metric: took 2.520198229s waiting for pod "kube-apiserver-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:00.534614   45839 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:00.540531   45839 pod_ready.go:92] pod "kube-controller-manager-pause-636215" in "kube-system" namespace has status "Ready":"True"
	I1024 20:01:00.540558   45839 pod_ready.go:81] duration metric: took 5.935772ms waiting for pod "kube-controller-manager-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:00.540569   45839 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d6wlp" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:00.545517   45839 pod_ready.go:92] pod "kube-proxy-d6wlp" in "kube-system" namespace has status "Ready":"True"
	I1024 20:01:00.545537   45839 pod_ready.go:81] duration metric: took 4.961561ms waiting for pod "kube-proxy-d6wlp" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:00.545546   45839 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:01.205259   45839 pod_ready.go:92] pod "kube-scheduler-pause-636215" in "kube-system" namespace has status "Ready":"True"
	I1024 20:01:01.205284   45839 pod_ready.go:81] duration metric: took 659.731837ms waiting for pod "kube-scheduler-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:01.205291   45839 pod_ready.go:38] duration metric: took 8.724685476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
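The pod_ready.go lines above poll each system-critical pod until its Ready condition reports "True". A minimal client-go sketch of that polling pattern, under the assumption of a default kubeconfig path and an illustrative pod/namespace (not values prescribed by this report), might look like:

// Sketch of the readiness poll seen in the pod_ready.go log lines above.
// The kubeconfig path, namespace, pod name, and retry cadence are assumptions
// made for illustration only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready:"True", as in the log above
				}
			}
		}
		time.Sleep(2 * time.Second) // re-check until Ready or timeout
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodReady(context.Background(), cs, "kube-system", "etcd-pause-636215", 6*time.Minute); err != nil {
		panic(err)
	}
}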
	I1024 20:01:01.205321   45839 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:01:01.205387   45839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:01:01.218108   45839 api_server.go:72] duration metric: took 8.868521703s to wait for apiserver process to appear ...
	I1024 20:01:01.218133   45839 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:01:01.218151   45839 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1024 20:01:01.222987   45839 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1024 20:01:01.224345   45839 api_server.go:141] control plane version: v1.28.3
	I1024 20:01:01.224362   45839 api_server.go:131] duration metric: took 6.222893ms to wait for apiserver health ...
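The healthz check logged just above issues a GET against the apiserver URL and treats an HTTP 200 with body "ok" as healthy. A minimal sketch of that probe follows; skipping TLS verification is an assumption made only to keep the example short, whereas a real client would trust the cluster CA.

// Sketch of the apiserver healthz probe shown in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// the log above shows "returned 200" followed by the body "ok"
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.169:8443/healthz")
	fmt.Println(ok, err)
}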
	I1024 20:01:01.224369   45839 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:01:01.410663   45839 system_pods.go:59] 6 kube-system pods found
	I1024 20:01:01.410699   45839 system_pods.go:61] "coredns-5dd5756b68-nfdht" [6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca] Running
	I1024 20:01:01.410707   45839 system_pods.go:61] "etcd-pause-636215" [f02ac085-1989-4990-a56f-1bb90cf0ef63] Running
	I1024 20:01:01.410714   45839 system_pods.go:61] "kube-apiserver-pause-636215" [20cb7b9a-b509-482e-8d58-f016e25cbc2b] Running
	I1024 20:01:01.410721   45839 system_pods.go:61] "kube-controller-manager-pause-636215" [28eb8d8d-94e0-498a-9e84-b15d78037e57] Running
	I1024 20:01:01.410726   45839 system_pods.go:61] "kube-proxy-d6wlp" [613a996e-22d0-4368-9200-a74934795f57] Running
	I1024 20:01:01.410732   45839 system_pods.go:61] "kube-scheduler-pause-636215" [70f6650b-1044-4db5-9ee0-707564adb93a] Running
	I1024 20:01:01.410740   45839 system_pods.go:74] duration metric: took 186.365482ms to wait for pod list to return data ...
	I1024 20:01:01.410749   45839 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:01:01.605685   45839 default_sa.go:45] found service account: "default"
	I1024 20:01:01.605723   45839 default_sa.go:55] duration metric: took 194.964417ms for default service account to be created ...
	I1024 20:01:01.605736   45839 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:01:01.809727   45839 system_pods.go:86] 6 kube-system pods found
	I1024 20:01:01.809755   45839 system_pods.go:89] "coredns-5dd5756b68-nfdht" [6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca] Running
	I1024 20:01:01.809761   45839 system_pods.go:89] "etcd-pause-636215" [f02ac085-1989-4990-a56f-1bb90cf0ef63] Running
	I1024 20:01:01.809766   45839 system_pods.go:89] "kube-apiserver-pause-636215" [20cb7b9a-b509-482e-8d58-f016e25cbc2b] Running
	I1024 20:01:01.809770   45839 system_pods.go:89] "kube-controller-manager-pause-636215" [28eb8d8d-94e0-498a-9e84-b15d78037e57] Running
	I1024 20:01:01.809774   45839 system_pods.go:89] "kube-proxy-d6wlp" [613a996e-22d0-4368-9200-a74934795f57] Running
	I1024 20:01:01.809778   45839 system_pods.go:89] "kube-scheduler-pause-636215" [70f6650b-1044-4db5-9ee0-707564adb93a] Running
	I1024 20:01:01.809785   45839 system_pods.go:126] duration metric: took 204.044022ms to wait for k8s-apps to be running ...
	I1024 20:01:01.809792   45839 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:01:01.809838   45839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:01:01.829844   45839 system_svc.go:56] duration metric: took 20.041538ms WaitForService to wait for kubelet.
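The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" over SSH and treats exit status 0 as "active". A small sketch of that check, assuming it is executed on the node itself (for example inside "minikube ssh") with the SSH transport omitted:

// Sketch of the kubelet service check logged above; exit status 0 from
// "systemctl is-active --quiet" means the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}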
	I1024 20:01:01.829871   45839 kubeadm.go:581] duration metric: took 9.480290688s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:01:01.829894   45839 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:01:02.006301   45839 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:01:02.006339   45839 node_conditions.go:123] node cpu capacity is 2
	I1024 20:01:02.006356   45839 node_conditions.go:105] duration metric: took 176.456036ms to run NodePressure ...
	I1024 20:01:02.006369   45839 start.go:228] waiting for startup goroutines ...
	I1024 20:01:02.006377   45839 start.go:233] waiting for cluster config update ...
	I1024 20:01:02.006385   45839 start.go:242] writing updated cluster config ...
	I1024 20:01:02.006639   45839 ssh_runner.go:195] Run: rm -f paused
	I1024 20:01:02.060820   45839 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:01:02.064075   45839 out.go:177] * Done! kubectl is now configured to use "pause-636215" cluster and "default" namespace by default
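The final start.go line above compares the kubectl client version with the cluster version and reports the difference in minor versions ("minor skew: 0"). A small sketch of that comparison, with the version strings hard-coded from the log and a simplified "major.minor.patch" parse:

// Sketch of the minor-skew calculation reported in the "Done!" log above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minorVersion(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.28.3", "1.28.3" // values taken from the log line above
	skew := minorVersion(client) - minorVersion(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
}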
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 19:58:39 UTC, ends at Tue 2023-10-24 20:01:03 UTC. --
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.830294805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698177662830279626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=500eac85-572d-46ca-a300-dc38e63c71a1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.830940251Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7b880557-a70c-49aa-96cf-afbab1bd53ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.831026963Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7b880557-a70c-49aa-96cf-afbab1bd53ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.831345101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b62b080569aa4c4c4d345bcaf4a772901e8d959d5b4b70b1be6a650693b7081,PodSandboxId:0a01d81fa6d17ecb246fc9c82d668f15c803a4fe6929301234cba8b949ce66ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698177648060765067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86e4b781ef0d2e7376c4b7211c30e04452a882306317e5e1a65adb000abdfa29,PodSandboxId:43ac1a5b9c9da3ea13de9456269b9991691d1c5fed1873b91bebaaa7a41c5dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698177646855404768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fed486f99e6b90003e75e2da908baaeeb6c72708ba187ef2bd87c92b72bc4de5,PodSandboxId:1a785aea93bd8195d06e9468b9cddd055f22d1c6d05f01dad632a7927259439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698177645625871880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab
57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3de53b518be4ebc9c45e73d110fac1fb4a131d3c07704d8a1e7bca47db01e4d,PodSandboxId:52ce598b374b43d47c032a73e17c7eeebc0b94c7e0fa3001bea58bef5c38badf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698177645349828463,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]string
{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac86699cccee17ed185d4823030d338b466254dfc003e340da78e44affe7a45f,PodSandboxId:fd5b5b1bcef69731eee8c00bc1f325e543a61bb4844ed39bce17bd48b7278007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698177644949201167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aa4a347948505f43e8148b61d57d36125e317e97e614d1dcc59a5cd8f5b7e09,PodSandboxId:ed8308e2ad8cdf8ffc3803bed0a2a95943f7b720942de8d46e26c0053c9104b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698177644713141870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56526104e12eda0b7a7745f83c7e49bda2e2efded0963685106dd3244659623,PodSandboxId:6ba69bfbf2709135969e23fceba66eee0ed0f83df1fb30a7765a83991662ab86,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698177568490939763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63ca9ca3b568a93fc8679b8d04e214fca6e218cad40f0de3a541ba015f40c27,PodSandboxId:bc7bc4404b7a9763616a5460fd1f14c041c82b901dc35272ebd5599ac12bf4e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698177568081665412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39b2fa0becbb01a311aec2ebf38ccd72596457a072295930bdc037f7a90c20d,PodSandboxId:d9947bce96b41e791b53bb7512e30401fa832d8f5dd28ea29b23adea21aceb6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698177545004286182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]
string{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516961cc67010792d8b5eb63f9378e82b801664d09031dc5cf2abf43a52eeca9,PodSandboxId:e668a7fc998df6093302d1d992e700257f49223f3af7ec06a2a3b66bbf390eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698177544738319247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca8fe6bdc353bdddf1154d3696053cc3a2198afbd08c0f20ad0a120a10073de,PodSandboxId:b1477f94e3df055d096accbf36ecbce228459dd9eaa06cc7103bdbde49a6bb38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698177544517729718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c34fb3e40fd4a77f21208f6b00ed3964eedc4cc823fa8ca8439812444eb5750,PodSandboxId:76fc5dd9ae400f473f1b15ec6a074a4ec866a3b32428a20f024462ec4bb9beca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698177544383195447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]string{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7b880557-a70c-49aa-96cf-afbab1bd53ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.881244243Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ce57c475-9163-44a0-a3af-98280cae936a name=/runtime.v1.RuntimeService/Version
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.881324446Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ce57c475-9163-44a0-a3af-98280cae936a name=/runtime.v1.RuntimeService/Version
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.882794784Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=74df1d61-beae-4ae0-9be4-db79dc2f6c18 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.883308565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698177662883291133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=74df1d61-beae-4ae0-9be4-db79dc2f6c18 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.884514422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d82bd931-2675-4759-9bf5-99fe34197cf1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.884585176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d82bd931-2675-4759-9bf5-99fe34197cf1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.884844563Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b62b080569aa4c4c4d345bcaf4a772901e8d959d5b4b70b1be6a650693b7081,PodSandboxId:0a01d81fa6d17ecb246fc9c82d668f15c803a4fe6929301234cba8b949ce66ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698177648060765067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86e4b781ef0d2e7376c4b7211c30e04452a882306317e5e1a65adb000abdfa29,PodSandboxId:43ac1a5b9c9da3ea13de9456269b9991691d1c5fed1873b91bebaaa7a41c5dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698177646855404768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fed486f99e6b90003e75e2da908baaeeb6c72708ba187ef2bd87c92b72bc4de5,PodSandboxId:1a785aea93bd8195d06e9468b9cddd055f22d1c6d05f01dad632a7927259439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698177645625871880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab
57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3de53b518be4ebc9c45e73d110fac1fb4a131d3c07704d8a1e7bca47db01e4d,PodSandboxId:52ce598b374b43d47c032a73e17c7eeebc0b94c7e0fa3001bea58bef5c38badf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698177645349828463,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]string
{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac86699cccee17ed185d4823030d338b466254dfc003e340da78e44affe7a45f,PodSandboxId:fd5b5b1bcef69731eee8c00bc1f325e543a61bb4844ed39bce17bd48b7278007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698177644949201167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aa4a347948505f43e8148b61d57d36125e317e97e614d1dcc59a5cd8f5b7e09,PodSandboxId:ed8308e2ad8cdf8ffc3803bed0a2a95943f7b720942de8d46e26c0053c9104b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698177644713141870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56526104e12eda0b7a7745f83c7e49bda2e2efded0963685106dd3244659623,PodSandboxId:6ba69bfbf2709135969e23fceba66eee0ed0f83df1fb30a7765a83991662ab86,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698177568490939763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63ca9ca3b568a93fc8679b8d04e214fca6e218cad40f0de3a541ba015f40c27,PodSandboxId:bc7bc4404b7a9763616a5460fd1f14c041c82b901dc35272ebd5599ac12bf4e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698177568081665412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39b2fa0becbb01a311aec2ebf38ccd72596457a072295930bdc037f7a90c20d,PodSandboxId:d9947bce96b41e791b53bb7512e30401fa832d8f5dd28ea29b23adea21aceb6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698177545004286182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]
string{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516961cc67010792d8b5eb63f9378e82b801664d09031dc5cf2abf43a52eeca9,PodSandboxId:e668a7fc998df6093302d1d992e700257f49223f3af7ec06a2a3b66bbf390eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698177544738319247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca8fe6bdc353bdddf1154d3696053cc3a2198afbd08c0f20ad0a120a10073de,PodSandboxId:b1477f94e3df055d096accbf36ecbce228459dd9eaa06cc7103bdbde49a6bb38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698177544517729718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c34fb3e40fd4a77f21208f6b00ed3964eedc4cc823fa8ca8439812444eb5750,PodSandboxId:76fc5dd9ae400f473f1b15ec6a074a4ec866a3b32428a20f024462ec4bb9beca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698177544383195447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]string{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d82bd931-2675-4759-9bf5-99fe34197cf1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.928862236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b3e4f9cb-30f4-45d7-b650-38d2df3d12e4 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.928946396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b3e4f9cb-30f4-45d7-b650-38d2df3d12e4 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.931000308Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ff758d5a-95ab-4075-ba4b-59331591dd65 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.931427394Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698177662931414420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=ff758d5a-95ab-4075-ba4b-59331591dd65 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.932368453Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c8958d55-a000-40b1-9563-1985b6198b1b name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.932452163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c8958d55-a000-40b1-9563-1985b6198b1b name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.932720730Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b62b080569aa4c4c4d345bcaf4a772901e8d959d5b4b70b1be6a650693b7081,PodSandboxId:0a01d81fa6d17ecb246fc9c82d668f15c803a4fe6929301234cba8b949ce66ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698177648060765067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86e4b781ef0d2e7376c4b7211c30e04452a882306317e5e1a65adb000abdfa29,PodSandboxId:43ac1a5b9c9da3ea13de9456269b9991691d1c5fed1873b91bebaaa7a41c5dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698177646855404768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fed486f99e6b90003e75e2da908baaeeb6c72708ba187ef2bd87c92b72bc4de5,PodSandboxId:1a785aea93bd8195d06e9468b9cddd055f22d1c6d05f01dad632a7927259439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698177645625871880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab
57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3de53b518be4ebc9c45e73d110fac1fb4a131d3c07704d8a1e7bca47db01e4d,PodSandboxId:52ce598b374b43d47c032a73e17c7eeebc0b94c7e0fa3001bea58bef5c38badf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698177645349828463,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]string
{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac86699cccee17ed185d4823030d338b466254dfc003e340da78e44affe7a45f,PodSandboxId:fd5b5b1bcef69731eee8c00bc1f325e543a61bb4844ed39bce17bd48b7278007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698177644949201167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aa4a347948505f43e8148b61d57d36125e317e97e614d1dcc59a5cd8f5b7e09,PodSandboxId:ed8308e2ad8cdf8ffc3803bed0a2a95943f7b720942de8d46e26c0053c9104b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698177644713141870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56526104e12eda0b7a7745f83c7e49bda2e2efded0963685106dd3244659623,PodSandboxId:6ba69bfbf2709135969e23fceba66eee0ed0f83df1fb30a7765a83991662ab86,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698177568490939763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63ca9ca3b568a93fc8679b8d04e214fca6e218cad40f0de3a541ba015f40c27,PodSandboxId:bc7bc4404b7a9763616a5460fd1f14c041c82b901dc35272ebd5599ac12bf4e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698177568081665412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39b2fa0becbb01a311aec2ebf38ccd72596457a072295930bdc037f7a90c20d,PodSandboxId:d9947bce96b41e791b53bb7512e30401fa832d8f5dd28ea29b23adea21aceb6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698177545004286182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]
string{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516961cc67010792d8b5eb63f9378e82b801664d09031dc5cf2abf43a52eeca9,PodSandboxId:e668a7fc998df6093302d1d992e700257f49223f3af7ec06a2a3b66bbf390eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698177544738319247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca8fe6bdc353bdddf1154d3696053cc3a2198afbd08c0f20ad0a120a10073de,PodSandboxId:b1477f94e3df055d096accbf36ecbce228459dd9eaa06cc7103bdbde49a6bb38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698177544517729718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c34fb3e40fd4a77f21208f6b00ed3964eedc4cc823fa8ca8439812444eb5750,PodSandboxId:76fc5dd9ae400f473f1b15ec6a074a4ec866a3b32428a20f024462ec4bb9beca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698177544383195447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]string{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c8958d55-a000-40b1-9563-1985b6198b1b name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.976397406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3c6b967e-638d-42b5-89bc-8470cc90bbf5 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.976450573Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3c6b967e-638d-42b5-89bc-8470cc90bbf5 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.977782945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=549ddaaa-b767-4a32-a282-49ef859028a1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.978320080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698177662978297827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=549ddaaa-b767-4a32-a282-49ef859028a1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.988118093Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ee96fa48-d056-4e69-82f0-d0ed6d9aff68 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.988582653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ee96fa48-d056-4e69-82f0-d0ed6d9aff68 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:02 pause-636215 crio[2390]: time="2023-10-24 20:01:02.989235392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b62b080569aa4c4c4d345bcaf4a772901e8d959d5b4b70b1be6a650693b7081,PodSandboxId:0a01d81fa6d17ecb246fc9c82d668f15c803a4fe6929301234cba8b949ce66ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698177648060765067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86e4b781ef0d2e7376c4b7211c30e04452a882306317e5e1a65adb000abdfa29,PodSandboxId:43ac1a5b9c9da3ea13de9456269b9991691d1c5fed1873b91bebaaa7a41c5dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698177646855404768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fed486f99e6b90003e75e2da908baaeeb6c72708ba187ef2bd87c92b72bc4de5,PodSandboxId:1a785aea93bd8195d06e9468b9cddd055f22d1c6d05f01dad632a7927259439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698177645625871880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab
57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3de53b518be4ebc9c45e73d110fac1fb4a131d3c07704d8a1e7bca47db01e4d,PodSandboxId:52ce598b374b43d47c032a73e17c7eeebc0b94c7e0fa3001bea58bef5c38badf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698177645349828463,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]string
{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac86699cccee17ed185d4823030d338b466254dfc003e340da78e44affe7a45f,PodSandboxId:fd5b5b1bcef69731eee8c00bc1f325e543a61bb4844ed39bce17bd48b7278007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698177644949201167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aa4a347948505f43e8148b61d57d36125e317e97e614d1dcc59a5cd8f5b7e09,PodSandboxId:ed8308e2ad8cdf8ffc3803bed0a2a95943f7b720942de8d46e26c0053c9104b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698177644713141870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56526104e12eda0b7a7745f83c7e49bda2e2efded0963685106dd3244659623,PodSandboxId:6ba69bfbf2709135969e23fceba66eee0ed0f83df1fb30a7765a83991662ab86,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698177568490939763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63ca9ca3b568a93fc8679b8d04e214fca6e218cad40f0de3a541ba015f40c27,PodSandboxId:bc7bc4404b7a9763616a5460fd1f14c041c82b901dc35272ebd5599ac12bf4e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698177568081665412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39b2fa0becbb01a311aec2ebf38ccd72596457a072295930bdc037f7a90c20d,PodSandboxId:d9947bce96b41e791b53bb7512e30401fa832d8f5dd28ea29b23adea21aceb6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698177545004286182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]
string{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516961cc67010792d8b5eb63f9378e82b801664d09031dc5cf2abf43a52eeca9,PodSandboxId:e668a7fc998df6093302d1d992e700257f49223f3af7ec06a2a3b66bbf390eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698177544738319247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca8fe6bdc353bdddf1154d3696053cc3a2198afbd08c0f20ad0a120a10073de,PodSandboxId:b1477f94e3df055d096accbf36ecbce228459dd9eaa06cc7103bdbde49a6bb38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698177544517729718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c34fb3e40fd4a77f21208f6b00ed3964eedc4cc823fa8ca8439812444eb5750,PodSandboxId:76fc5dd9ae400f473f1b15ec6a074a4ec866a3b32428a20f024462ec4bb9beca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698177544383195447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]string{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ee96fa48-d056-4e69-82f0-d0ed6d9aff68 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6b62b080569aa       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   15 seconds ago       Running             kube-proxy                1                   0a01d81fa6d17       kube-proxy-d6wlp
	86e4b781ef0d2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 seconds ago       Running             coredns                   1                   43ac1a5b9c9da       coredns-5dd5756b68-nfdht
	fed486f99e6b9       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   17 seconds ago       Running             kube-scheduler            1                   1a785aea93bd8       kube-scheduler-pause-636215
	e3de53b518be4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   17 seconds ago       Running             etcd                      1                   52ce598b374b4       etcd-pause-636215
	ac86699cccee1       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   18 seconds ago       Running             kube-controller-manager   1                   fd5b5b1bcef69       kube-controller-manager-pause-636215
	2aa4a34794850       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   18 seconds ago       Running             kube-apiserver            1                   ed8308e2ad8cd       kube-apiserver-pause-636215
	f56526104e12e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   0                   6ba69bfbf2709       coredns-5dd5756b68-nfdht
	b63ca9ca3b568       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   About a minute ago   Exited              kube-proxy                0                   bc7bc4404b7a9       kube-proxy-d6wlp
	d39b2fa0becbb       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   About a minute ago   Exited              etcd                      0                   d9947bce96b41       etcd-pause-636215
	516961cc67010       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   About a minute ago   Exited              kube-controller-manager   0                   e668a7fc998df       kube-controller-manager-pause-636215
	2ca8fe6bdc353       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   About a minute ago   Exited              kube-scheduler            0                   b1477f94e3df0       kube-scheduler-pause-636215
	6c34fb3e40fd4       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   About a minute ago   Exited              kube-apiserver            0                   76fc5dd9ae400       kube-apiserver-pause-636215
	
	* 
	* ==> coredns [86e4b781ef0d2e7376c4b7211c30e04452a882306317e5e1a65adb000abdfa29] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58232 - 19053 "HINFO IN 2223351225957930505.8595811934903700078. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008637598s
	
	* 
	* ==> coredns [f56526104e12eda0b7a7745f83c7e49bda2e2efded0963685106dd3244659623] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-636215
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-636215
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=pause-636215
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_59_12_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:59:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-636215
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 20:00:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:59:32 +0000   Tue, 24 Oct 2023 19:59:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:59:32 +0000   Tue, 24 Oct 2023 19:59:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:59:32 +0000   Tue, 24 Oct 2023 19:59:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:59:32 +0000   Tue, 24 Oct 2023 19:59:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    pause-636215
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 0be8918910684939999ff169bd67f488
	  System UUID:                0be89189-1068-4939-999f-f169bd67f488
	  Boot ID:                    f988ab5d-0742-4b8b-8aaa-aadec0bdc029
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-nfdht                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     97s
	  kube-system                 etcd-pause-636215                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         111s
	  kube-system                 kube-apiserver-pause-636215             250m (12%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-pause-636215    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-d6wlp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-pause-636215             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             170Mi (8%)   170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 94s   kube-proxy       
	  Normal  Starting                 12s   kube-proxy       
	  Normal  Starting                 111s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  111s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  111s  kubelet          Node pause-636215 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s  kubelet          Node pause-636215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s  kubelet          Node pause-636215 status is now: NodeHasSufficientPID
	  Normal  NodeReady                111s  kubelet          Node pause-636215 status is now: NodeReady
	  Normal  RegisteredNode           98s   node-controller  Node pause-636215 event: Registered Node pause-636215 in Controller
	  Normal  RegisteredNode           0s    node-controller  Node pause-636215 event: Registered Node pause-636215 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct24 19:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076372] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.696977] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.650739] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.166061] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.324317] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.694535] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.098205] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.138338] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.136986] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.250709] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[Oct24 19:59] systemd-fstab-generator[931]: Ignoring "noauto" for root device
	[  +8.781282] systemd-fstab-generator[1265]: Ignoring "noauto" for root device
	[Oct24 20:00] kauditd_printk_skb: 21 callbacks suppressed
	[ +30.331790] systemd-fstab-generator[2101]: Ignoring "noauto" for root device
	[  +0.155781] systemd-fstab-generator[2112]: Ignoring "noauto" for root device
	[  +0.198113] systemd-fstab-generator[2126]: Ignoring "noauto" for root device
	[  +0.135747] systemd-fstab-generator[2137]: Ignoring "noauto" for root device
	[  +0.260948] systemd-fstab-generator[2160]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [d39b2fa0becbb01a311aec2ebf38ccd72596457a072295930bdc037f7a90c20d] <==
	* {"level":"info","ts":"2023-10-24T19:59:25.982868Z","caller":"traceutil/trace.go:171","msg":"trace[719258477] transaction","detail":"{read_only:false; response_revision:296; number_of_response:1; }","duration":"252.540535ms","start":"2023-10-24T19:59:25.730322Z","end":"2023-10-24T19:59:25.982862Z","steps":["trace[719258477] 'process raft request'  (duration: 248.007091ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:59:25.991003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.590116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-636215\" ","response":"range_response_count:1 size:5473"}
	{"level":"info","ts":"2023-10-24T19:59:25.991169Z","caller":"traceutil/trace.go:171","msg":"trace[359796920] range","detail":"{range_begin:/registry/minions/pause-636215; range_end:; response_count:1; response_revision:300; }","duration":"101.751882ms","start":"2023-10-24T19:59:25.889399Z","end":"2023-10-24T19:59:25.991151Z","steps":["trace[359796920] 'agreement among raft nodes before linearized reading'  (duration: 101.538803ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:59:25.99203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.472191ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T19:59:25.992535Z","caller":"traceutil/trace.go:171","msg":"trace[1120418816] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:300; }","duration":"160.977866ms","start":"2023-10-24T19:59:25.831544Z","end":"2023-10-24T19:59:25.992522Z","steps":["trace[1120418816] 'agreement among raft nodes before linearized reading'  (duration: 160.363723ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:59:25.993633Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.974188ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" ","response":"range_response_count:1 size:234"}
	{"level":"info","ts":"2023-10-24T19:59:25.993711Z","caller":"traceutil/trace.go:171","msg":"trace[1138855935] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:300; }","duration":"260.055313ms","start":"2023-10-24T19:59:25.733646Z","end":"2023-10-24T19:59:25.993702Z","steps":["trace[1138855935] 'agreement among raft nodes before linearized reading'  (duration: 259.785364ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:59:50.096712Z","caller":"traceutil/trace.go:171","msg":"trace[1520055643] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"145.308771ms","start":"2023-10-24T19:59:49.951376Z","end":"2023-10-24T19:59:50.096685Z","steps":["trace[1520055643] 'process raft request'  (duration: 145.027712ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:59:50.836869Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.386762ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16466157921071883274 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.169\" mod_revision:374 > success:<request_put:<key:\"/registry/masterleases/192.168.39.169\" value_size:67 lease:7242785884217107464 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.169\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-24T19:59:50.837205Z","caller":"traceutil/trace.go:171","msg":"trace[1295792892] linearizableReadLoop","detail":"{readStateIndex:397; appliedIndex:396; }","duration":"247.09247ms","start":"2023-10-24T19:59:50.590097Z","end":"2023-10-24T19:59:50.837189Z","steps":["trace[1295792892] 'read index received'  (duration: 116.516684ms)","trace[1295792892] 'applied index is now lower than readState.Index'  (duration: 130.573793ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:59:50.837269Z","caller":"traceutil/trace.go:171","msg":"trace[918706409] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"258.828922ms","start":"2023-10-24T19:59:50.578422Z","end":"2023-10-24T19:59:50.837251Z","steps":["trace[918706409] 'process raft request'  (duration: 128.206124ms)","trace[918706409] 'compare'  (duration: 129.185287ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T19:59:50.837601Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.841492ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-nfdht\" ","response":"range_response_count:1 size:4736"}
	{"level":"info","ts":"2023-10-24T19:59:50.837674Z","caller":"traceutil/trace.go:171","msg":"trace[1201623625] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-nfdht; range_end:; response_count:1; response_revision:378; }","duration":"238.915333ms","start":"2023-10-24T19:59:50.598749Z","end":"2023-10-24T19:59:50.837665Z","steps":["trace[1201623625] 'agreement among raft nodes before linearized reading'  (duration: 238.814289ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:59:50.837379Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.365649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T19:59:50.837867Z","caller":"traceutil/trace.go:171","msg":"trace[366836922] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:378; }","duration":"247.868101ms","start":"2023-10-24T19:59:50.589988Z","end":"2023-10-24T19:59:50.837857Z","steps":["trace[366836922] 'agreement among raft nodes before linearized reading'  (duration: 247.310218ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:00:31.582367Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-24T20:00:31.582471Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-636215","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.169:2380"],"advertise-client-urls":["https://192.168.39.169:2379"]}
	{"level":"warn","ts":"2023-10-24T20:00:31.582586Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T20:00:31.582672Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T20:00:31.676117Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.169:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T20:00:31.676226Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.169:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-24T20:00:31.676406Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"88d17c48ad0ae483","current-leader-member-id":"88d17c48ad0ae483"}
	{"level":"info","ts":"2023-10-24T20:00:31.680307Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.169:2380"}
	{"level":"info","ts":"2023-10-24T20:00:31.680626Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.169:2380"}
	{"level":"info","ts":"2023-10-24T20:00:31.680662Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-636215","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.169:2380"],"advertise-client-urls":["https://192.168.39.169:2379"]}
	
	* 
	* ==> etcd [e3de53b518be4ebc9c45e73d110fac1fb4a131d3c07704d8a1e7bca47db01e4d] <==
	* {"level":"info","ts":"2023-10-24T20:00:47.819693Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-24T20:00:47.819719Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-24T20:00:47.820143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 switched to configuration voters=(9858797710873388163)"}
	{"level":"info","ts":"2023-10-24T20:00:47.822221Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-24T20:00:47.822805Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"88d17c48ad0ae483","initial-advertise-peer-urls":["https://192.168.39.169:2380"],"listen-peer-urls":["https://192.168.39.169:2380"],"advertise-client-urls":["https://192.168.39.169:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.169:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-24T20:00:47.82286Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T20:00:47.822254Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.169:2380"}
	{"level":"info","ts":"2023-10-24T20:00:47.822921Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.169:2380"}
	{"level":"info","ts":"2023-10-24T20:00:47.823447Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dd1030519101f266","local-member-id":"88d17c48ad0ae483","added-peer-id":"88d17c48ad0ae483","added-peer-peer-urls":["https://192.168.39.169:2380"]}
	{"level":"info","ts":"2023-10-24T20:00:47.823937Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dd1030519101f266","local-member-id":"88d17c48ad0ae483","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T20:00:47.824208Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T20:00:49.105254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-24T20:00:49.105328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-24T20:00:49.105353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 received MsgPreVoteResp from 88d17c48ad0ae483 at term 2"}
	{"level":"info","ts":"2023-10-24T20:00:49.105365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 became candidate at term 3"}
	{"level":"info","ts":"2023-10-24T20:00:49.105371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 received MsgVoteResp from 88d17c48ad0ae483 at term 3"}
	{"level":"info","ts":"2023-10-24T20:00:49.105379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 became leader at term 3"}
	{"level":"info","ts":"2023-10-24T20:00:49.105386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 88d17c48ad0ae483 elected leader 88d17c48ad0ae483 at term 3"}
	{"level":"info","ts":"2023-10-24T20:00:49.107973Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"88d17c48ad0ae483","local-member-attributes":"{Name:pause-636215 ClientURLs:[https://192.168.39.169:2379]}","request-path":"/0/members/88d17c48ad0ae483/attributes","cluster-id":"dd1030519101f266","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T20:00:49.108007Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T20:00:49.108361Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T20:00:49.109883Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T20:00:49.11113Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T20:00:49.111258Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T20:00:49.109883Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.169:2379"}
	
	* 
	* ==> kernel <==
	*  20:01:03 up 2 min,  0 users,  load average: 1.23, 0.53, 0.20
	Linux pause-636215 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2aa4a347948505f43e8148b61d57d36125e317e97e614d1dcc59a5cd8f5b7e09] <==
	* I1024 20:00:50.737461       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1024 20:00:50.737755       1 controller.go:134] Starting OpenAPI controller
	I1024 20:00:50.739241       1 controller.go:85] Starting OpenAPI V3 controller
	I1024 20:00:50.739310       1 naming_controller.go:291] Starting NamingConditionController
	I1024 20:00:50.739361       1 establishing_controller.go:76] Starting EstablishingController
	I1024 20:00:50.739414       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1024 20:00:50.739461       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1024 20:00:50.739504       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1024 20:00:50.852839       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1024 20:00:50.865581       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 20:00:50.926737       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1024 20:00:50.926780       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1024 20:00:50.932865       1 shared_informer.go:318] Caches are synced for configmaps
	I1024 20:00:50.935439       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 20:00:50.935511       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1024 20:00:50.936774       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1024 20:00:50.937317       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1024 20:00:50.937368       1 aggregator.go:166] initial CRD sync complete...
	I1024 20:00:50.937379       1 autoregister_controller.go:141] Starting autoregister controller
	I1024 20:00:50.937383       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1024 20:00:50.937388       1 cache.go:39] Caches are synced for autoregister controller
	E1024 20:00:50.962641       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1024 20:00:51.734333       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1024 20:01:03.270388       1 controller.go:624] quota admission added evaluator for: endpoints
	I1024 20:01:03.344912       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [6c34fb3e40fd4a77f21208f6b00ed3964eedc4cc823fa8ca8439812444eb5750] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1024 20:00:31.615752       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1024 20:00:31.621017       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1024 20:00:31.621257       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [516961cc67010792d8b5eb63f9378e82b801664d09031dc5cf2abf43a52eeca9] <==
	* I1024 19:59:25.293434       1 shared_informer.go:318] Caches are synced for HPA
	I1024 19:59:25.366707       1 range_allocator.go:380] "Set node PodCIDR" node="pause-636215" podCIDRs=["10.244.0.0/24"]
	I1024 19:59:25.667471       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:59:25.687272       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:59:25.687336       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1024 19:59:26.001350       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1024 19:59:26.111332       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d6wlp"
	I1024 19:59:26.150916       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-g85kj"
	I1024 19:59:26.216675       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-nfdht"
	I1024 19:59:26.247441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="261.438949ms"
	I1024 19:59:26.315411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.871532ms"
	I1024 19:59:26.315679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="139.162µs"
	I1024 19:59:26.347497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="230.911µs"
	I1024 19:59:26.371027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.517µs"
	I1024 19:59:26.432360       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1024 19:59:26.473170       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-g85kj"
	I1024 19:59:26.486330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.631057ms"
	I1024 19:59:26.497590       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.20237ms"
	I1024 19:59:26.498528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="426.931µs"
	I1024 19:59:28.283702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.758µs"
	I1024 19:59:28.297410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.141µs"
	I1024 19:59:28.317683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.176µs"
	I1024 19:59:29.316702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="230.497µs"
	I1024 20:00:07.527835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.705044ms"
	I1024 20:00:07.528288       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.241µs"
	
	* 
	* ==> kube-controller-manager [ac86699cccee17ed185d4823030d338b466254dfc003e340da78e44affe7a45f] <==
	* I1024 20:01:03.243570       1 shared_informer.go:318] Caches are synced for persistent volume
	I1024 20:01:03.254201       1 shared_informer.go:318] Caches are synced for expand
	I1024 20:01:03.260718       1 shared_informer.go:318] Caches are synced for endpoint
	I1024 20:01:03.267129       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1024 20:01:03.267333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.844µs"
	I1024 20:01:03.291550       1 shared_informer.go:318] Caches are synced for HPA
	I1024 20:01:03.295554       1 shared_informer.go:318] Caches are synced for stateful set
	I1024 20:01:03.300473       1 shared_informer.go:318] Caches are synced for node
	I1024 20:01:03.300804       1 range_allocator.go:174] "Sending events to api server"
	I1024 20:01:03.300931       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1024 20:01:03.300958       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1024 20:01:03.301180       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1024 20:01:03.301308       1 shared_informer.go:318] Caches are synced for crt configmap
	I1024 20:01:03.302588       1 shared_informer.go:318] Caches are synced for GC
	I1024 20:01:03.304833       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1024 20:01:03.307141       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1024 20:01:03.309687       1 shared_informer.go:318] Caches are synced for TTL
	I1024 20:01:03.311107       1 shared_informer.go:318] Caches are synced for deployment
	I1024 20:01:03.312451       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1024 20:01:03.324785       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1024 20:01:03.330521       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1024 20:01:03.332877       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1024 20:01:03.334532       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1024 20:01:03.363375       1 shared_informer.go:318] Caches are synced for resource quota
	I1024 20:01:03.380821       1 shared_informer.go:318] Caches are synced for resource quota
	
	* 
	* ==> kube-proxy [6b62b080569aa4c4c4d345bcaf4a772901e8d959d5b4b70b1be6a650693b7081] <==
	* I1024 20:00:48.272795       1 server_others.go:69] "Using iptables proxy"
	I1024 20:00:50.883442       1 node.go:141] Successfully retrieved node IP: 192.168.39.169
	I1024 20:00:51.016683       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 20:00:51.016769       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 20:00:51.021518       1 server_others.go:152] "Using iptables Proxier"
	I1024 20:00:51.021703       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 20:00:51.022015       1 server.go:846] "Version info" version="v1.28.3"
	I1024 20:00:51.022998       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:00:51.025178       1 config.go:188] "Starting service config controller"
	I1024 20:00:51.025463       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 20:00:51.025615       1 config.go:97] "Starting endpoint slice config controller"
	I1024 20:00:51.025758       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 20:00:51.029199       1 config.go:315] "Starting node config controller"
	I1024 20:00:51.029254       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 20:00:51.126544       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 20:00:51.126680       1 shared_informer.go:318] Caches are synced for service config
	I1024 20:00:51.129351       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [b63ca9ca3b568a93fc8679b8d04e214fca6e218cad40f0de3a541ba015f40c27] <==
	* I1024 19:59:28.729311       1 server_others.go:69] "Using iptables proxy"
	I1024 19:59:28.759363       1 node.go:141] Successfully retrieved node IP: 192.168.39.169
	I1024 19:59:28.855830       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 19:59:28.855930       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 19:59:28.862019       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:59:28.862974       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:59:28.863463       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:59:28.863516       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:59:28.866778       1 config.go:188] "Starting service config controller"
	I1024 19:59:28.867497       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:59:28.867578       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:59:28.867588       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:59:28.871305       1 config.go:315] "Starting node config controller"
	I1024 19:59:28.871348       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:59:28.968314       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 19:59:28.968641       1 shared_informer.go:318] Caches are synced for service config
	I1024 19:59:28.971841       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2ca8fe6bdc353bdddf1154d3696053cc3a2198afbd08c0f20ad0a120a10073de] <==
	* E1024 19:59:08.791366       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1024 19:59:08.791199       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:59:08.791376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1024 19:59:09.610574       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:59:09.610679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1024 19:59:09.724738       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:59:09.724834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1024 19:59:09.746379       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:59:09.746464       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 19:59:09.752393       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 19:59:09.752448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1024 19:59:09.842296       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:59:09.842519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1024 19:59:09.928113       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 19:59:09.928199       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1024 19:59:09.940578       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1024 19:59:09.940677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1024 19:59:10.007610       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1024 19:59:10.007706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1024 19:59:10.018588       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 19:59:10.018668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1024 19:59:10.081260       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 19:59:10.081338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1024 19:59:11.474927       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1024 20:00:31.598813       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [fed486f99e6b90003e75e2da908baaeeb6c72708ba187ef2bd87c92b72bc4de5] <==
	* I1024 20:00:48.019627       1 serving.go:348] Generated self-signed cert in-memory
	W1024 20:00:50.824137       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 20:00:50.824273       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 20:00:50.824288       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 20:00:50.824295       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 20:00:50.882876       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 20:00:50.882999       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:00:50.885784       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 20:00:50.886495       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 20:00:50.886629       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 20:00:50.886757       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 20:00:50.987005       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 19:58:39 UTC, ends at Tue 2023-10-24 20:01:04 UTC. --
	Oct 24 20:00:42 pause-636215 kubelet[1272]: I1024 20:00:42.171253    1272 status_manager.go:853] "Failed to get status for pod" podUID="21b1b3cc77ec96c7c17920c751192fef" pod="kube-system/kube-controller-manager-pause-636215" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-636215\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:42 pause-636215 kubelet[1272]: I1024 20:00:42.171431    1272 status_manager.go:853] "Failed to get status for pod" podUID="45d05616c6f89165d21cdd2d079b07b9" pod="kube-system/kube-apiserver-pause-636215" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-636215\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.306263    1272 remote_runtime.go:633] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.306435    1272 kubelet.go:2840] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.333877    1272 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.334131    1272 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.523417    1272 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.523463    1272 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.523482    1272 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.645891    1272 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-pause-636215.1791238f04673d64", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-pause-636215", UID:"45d05616c6f89165d21cdd2d079b07b9", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"https://192.168.39.169:8443/readyz\": EOF", Source:v1.EventSource
{Component:"kubelet", Host:"pause-636215"}, FirstTimestamp:time.Date(2023, time.October, 24, 20, 0, 31, 656557924, time.Local), LastTimestamp:time.Date(2023, time.October, 24, 20, 0, 31, 656557924, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-636215"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.39.169:8443: connect: connection refused'(may retry after sleeping)
	Oct 24 20:00:43 pause-636215 kubelet[1272]: I1024 20:00:43.534992    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a34640eb72ebfb29982cd29c5c46478a4ec1ab2117060277038ef7583327008"
	Oct 24 20:00:43 pause-636215 kubelet[1272]: I1024 20:00:43.556629    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0add9ee1f3204aea13f669b6997dfb8d915e188da85cabe014395dd9656fa928"
	Oct 24 20:00:43 pause-636215 kubelet[1272]: I1024 20:00:43.578370    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aebd77244f5223d392efa1a63157baf9aea9029ad82d8cc4ac77d877e826758"
	Oct 24 20:00:43 pause-636215 kubelet[1272]: I1024 20:00:43.587341    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d174073fc1dcdef00d0adc5a3321dfa5d8e9f8fb9f6eaf79fe527375154aa21"
	Oct 24 20:00:43 pause-636215 kubelet[1272]: I1024 20:00:43.601993    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f507dc38a3ef393b26695ff082e7d5f54ae74985848113ed46ed84acf153079"
	Oct 24 20:00:43 pause-636215 kubelet[1272]: I1024 20:00:43.641295    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e828ea9dd44b4316d065027f85e3acef67995c0117f532e2c0bdd8c4a0edc8a4"
	Oct 24 20:00:44 pause-636215 kubelet[1272]: E1024 20:00:44.383859    1272 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-636215\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-636215?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:44 pause-636215 kubelet[1272]: E1024 20:00:44.384365    1272 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-636215\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-636215?timeout=10s\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:44 pause-636215 kubelet[1272]: E1024 20:00:44.384677    1272 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-636215\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-636215?timeout=10s\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:44 pause-636215 kubelet[1272]: E1024 20:00:44.384972    1272 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-636215\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-636215?timeout=10s\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:44 pause-636215 kubelet[1272]: E1024 20:00:44.385301    1272 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-636215\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-636215?timeout=10s\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:44 pause-636215 kubelet[1272]: E1024 20:00:44.385353    1272 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Oct 24 20:00:46 pause-636215 kubelet[1272]: E1024 20:00:46.225957    1272 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-636215?timeout=10s\": dial tcp 192.168.39.169:8443: connect: connection refused" interval="7s"
	Oct 24 20:00:50 pause-636215 kubelet[1272]: E1024 20:00:50.779811    1272 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Oct 24 20:00:50 pause-636215 kubelet[1272]: E1024 20:00:50.779960    1272 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-636215 -n pause-636215
helpers_test.go:261: (dbg) Run:  kubectl --context pause-636215 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-636215 -n pause-636215
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-636215 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-636215 logs -n 25: (1.470212614s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-784554 sudo                  | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo                  | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo                  | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-164196           | kubernetes-upgrade-164196 | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	| ssh     | -p cilium-784554 sudo cat              | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo cat              | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo                  | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo                  | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo                  | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo find             | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-784554 sudo crio             | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-784554                       | cilium-784554             | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	| start   | -p pause-636215 --memory=2048          | pause-636215              | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 20:00 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p cert-expiration-051222              | cert-expiration-051222    | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:59 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-912715            | force-systemd-env-912715  | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	| start   | -p force-systemd-flag-569251           | force-systemd-flag-569251 | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 20:00 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-569251 ssh cat      | force-systemd-flag-569251 | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-569251           | force-systemd-flag-569251 | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	| start   | -p cert-options-116938                 | cert-options-116938       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-636215                        | pause-636215              | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:01 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-145190              | stopped-upgrade-145190    | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-116938 ssh                | cert-options-116938       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-116938 -- sudo         | cert-options-116938       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-116938                 | cert-options-116938       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	| start   | -p old-k8s-version-467375              | old-k8s-version-467375    | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 20:00:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 20:00:55.777980   46309 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:00:55.778119   46309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:00:55.778132   46309 out.go:309] Setting ErrFile to fd 2...
	I1024 20:00:55.778141   46309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:00:55.778365   46309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:00:55.779116   46309 out.go:303] Setting JSON to false
	I1024 20:00:55.780116   46309 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5954,"bootTime":1698171702,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 20:00:55.780181   46309 start.go:138] virtualization: kvm guest
	I1024 20:00:55.782853   46309 out.go:177] * [old-k8s-version-467375] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 20:00:55.784305   46309 notify.go:220] Checking for updates...
	I1024 20:00:55.784316   46309 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:00:55.786000   46309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:00:55.787493   46309 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:00:55.788958   46309 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:00:55.790440   46309 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 20:00:55.791952   46309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:00:55.794214   46309 config.go:182] Loaded profile config "cert-expiration-051222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:00:55.794430   46309 config.go:182] Loaded profile config "pause-636215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:00:55.794558   46309 config.go:182] Loaded profile config "stopped-upgrade-145190": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1024 20:00:55.794658   46309 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:00:55.832272   46309 out.go:177] * Using the kvm2 driver based on user configuration
	I1024 20:00:55.833733   46309 start.go:298] selected driver: kvm2
	I1024 20:00:55.833749   46309 start.go:902] validating driver "kvm2" against <nil>
	I1024 20:00:55.833760   46309 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:00:55.834555   46309 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:55.834652   46309 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 20:00:55.850467   46309 install.go:137] /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1024 20:00:55.850516   46309 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 20:00:55.850699   46309 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 20:00:55.850762   46309 cni.go:84] Creating CNI manager for ""
	I1024 20:00:55.850773   46309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:00:55.850788   46309 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1024 20:00:55.850796   46309 start_flags.go:323] config:
	{Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:00:55.850945   46309 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:00:55.853678   46309 out.go:177] * Starting control plane node old-k8s-version-467375 in cluster old-k8s-version-467375
	I1024 20:00:52.422219   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:52.422698   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:52.422719   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:52.422624   46043 retry.go:31] will retry after 4.483565346s: waiting for machine to come up
	I1024 20:00:54.509115   45839 pod_ready.go:102] pod "coredns-5dd5756b68-nfdht" in "kube-system" namespace has status "Ready":"False"
	I1024 20:00:57.009479   45839 pod_ready.go:102] pod "coredns-5dd5756b68-nfdht" in "kube-system" namespace has status "Ready":"False"
	I1024 20:00:58.008355   45839 pod_ready.go:92] pod "coredns-5dd5756b68-nfdht" in "kube-system" namespace has status "Ready":"True"
	I1024 20:00:58.008392   45839 pod_ready.go:81] duration metric: took 5.521332647s waiting for pod "coredns-5dd5756b68-nfdht" in "kube-system" namespace to be "Ready" ...
	I1024 20:00:58.008405   45839 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:00:58.014365   45839 pod_ready.go:92] pod "etcd-pause-636215" in "kube-system" namespace has status "Ready":"True"
	I1024 20:00:58.014386   45839 pod_ready.go:81] duration metric: took 5.973937ms waiting for pod "etcd-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:00:58.014397   45839 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:00:55.855002   46309 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 20:00:55.855044   46309 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1024 20:00:55.855052   46309 cache.go:57] Caching tarball of preloaded images
	I1024 20:00:55.855132   46309 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 20:00:55.855142   46309 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1024 20:00:55.855238   46309 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:00:55.855254   46309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json: {Name:mk43f9e728f338b352792f83f227429aee8984c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:00:55.855390   46309 start.go:365] acquiring machines lock for old-k8s-version-467375: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:00:56.910795   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:00:56.911318   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:00:56.911357   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:00:56.911292   46043 retry.go:31] will retry after 4.743729078s: waiting for machine to come up
	I1024 20:01:01.658510   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | domain stopped-upgrade-145190 has defined MAC address 52:54:00:0b:a4:79 in network minikube-net
	I1024 20:01:01.659221   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | unable to find current IP address of domain stopped-upgrade-145190 in network minikube-net
	I1024 20:01:01.659254   45898 main.go:141] libmachine: (stopped-upgrade-145190) DBG | I1024 20:01:01.659169   46043 retry.go:31] will retry after 5.172231673s: waiting for machine to come up
	I1024 20:01:00.034452   45839 pod_ready.go:102] pod "kube-apiserver-pause-636215" in "kube-system" namespace has status "Ready":"False"
	I1024 20:01:00.534577   45839 pod_ready.go:92] pod "kube-apiserver-pause-636215" in "kube-system" namespace has status "Ready":"True"
	I1024 20:01:00.534602   45839 pod_ready.go:81] duration metric: took 2.520198229s waiting for pod "kube-apiserver-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:00.534614   45839 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:00.540531   45839 pod_ready.go:92] pod "kube-controller-manager-pause-636215" in "kube-system" namespace has status "Ready":"True"
	I1024 20:01:00.540558   45839 pod_ready.go:81] duration metric: took 5.935772ms waiting for pod "kube-controller-manager-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:00.540569   45839 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d6wlp" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:00.545517   45839 pod_ready.go:92] pod "kube-proxy-d6wlp" in "kube-system" namespace has status "Ready":"True"
	I1024 20:01:00.545537   45839 pod_ready.go:81] duration metric: took 4.961561ms waiting for pod "kube-proxy-d6wlp" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:00.545546   45839 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:01.205259   45839 pod_ready.go:92] pod "kube-scheduler-pause-636215" in "kube-system" namespace has status "Ready":"True"
	I1024 20:01:01.205284   45839 pod_ready.go:81] duration metric: took 659.731837ms waiting for pod "kube-scheduler-pause-636215" in "kube-system" namespace to be "Ready" ...
	I1024 20:01:01.205291   45839 pod_ready.go:38] duration metric: took 8.724685476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:01:01.205321   45839 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:01:01.205387   45839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:01:01.218108   45839 api_server.go:72] duration metric: took 8.868521703s to wait for apiserver process to appear ...
	I1024 20:01:01.218133   45839 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:01:01.218151   45839 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1024 20:01:01.222987   45839 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1024 20:01:01.224345   45839 api_server.go:141] control plane version: v1.28.3
	I1024 20:01:01.224362   45839 api_server.go:131] duration metric: took 6.222893ms to wait for apiserver health ...
	I1024 20:01:01.224369   45839 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:01:01.410663   45839 system_pods.go:59] 6 kube-system pods found
	I1024 20:01:01.410699   45839 system_pods.go:61] "coredns-5dd5756b68-nfdht" [6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca] Running
	I1024 20:01:01.410707   45839 system_pods.go:61] "etcd-pause-636215" [f02ac085-1989-4990-a56f-1bb90cf0ef63] Running
	I1024 20:01:01.410714   45839 system_pods.go:61] "kube-apiserver-pause-636215" [20cb7b9a-b509-482e-8d58-f016e25cbc2b] Running
	I1024 20:01:01.410721   45839 system_pods.go:61] "kube-controller-manager-pause-636215" [28eb8d8d-94e0-498a-9e84-b15d78037e57] Running
	I1024 20:01:01.410726   45839 system_pods.go:61] "kube-proxy-d6wlp" [613a996e-22d0-4368-9200-a74934795f57] Running
	I1024 20:01:01.410732   45839 system_pods.go:61] "kube-scheduler-pause-636215" [70f6650b-1044-4db5-9ee0-707564adb93a] Running
	I1024 20:01:01.410740   45839 system_pods.go:74] duration metric: took 186.365482ms to wait for pod list to return data ...
	I1024 20:01:01.410749   45839 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:01:01.605685   45839 default_sa.go:45] found service account: "default"
	I1024 20:01:01.605723   45839 default_sa.go:55] duration metric: took 194.964417ms for default service account to be created ...
	I1024 20:01:01.605736   45839 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:01:01.809727   45839 system_pods.go:86] 6 kube-system pods found
	I1024 20:01:01.809755   45839 system_pods.go:89] "coredns-5dd5756b68-nfdht" [6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca] Running
	I1024 20:01:01.809761   45839 system_pods.go:89] "etcd-pause-636215" [f02ac085-1989-4990-a56f-1bb90cf0ef63] Running
	I1024 20:01:01.809766   45839 system_pods.go:89] "kube-apiserver-pause-636215" [20cb7b9a-b509-482e-8d58-f016e25cbc2b] Running
	I1024 20:01:01.809770   45839 system_pods.go:89] "kube-controller-manager-pause-636215" [28eb8d8d-94e0-498a-9e84-b15d78037e57] Running
	I1024 20:01:01.809774   45839 system_pods.go:89] "kube-proxy-d6wlp" [613a996e-22d0-4368-9200-a74934795f57] Running
	I1024 20:01:01.809778   45839 system_pods.go:89] "kube-scheduler-pause-636215" [70f6650b-1044-4db5-9ee0-707564adb93a] Running
	I1024 20:01:01.809785   45839 system_pods.go:126] duration metric: took 204.044022ms to wait for k8s-apps to be running ...
	I1024 20:01:01.809792   45839 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:01:01.809838   45839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:01:01.829844   45839 system_svc.go:56] duration metric: took 20.041538ms WaitForService to wait for kubelet.
	I1024 20:01:01.829871   45839 kubeadm.go:581] duration metric: took 9.480290688s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:01:01.829894   45839 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:01:02.006301   45839 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:01:02.006339   45839 node_conditions.go:123] node cpu capacity is 2
	I1024 20:01:02.006356   45839 node_conditions.go:105] duration metric: took 176.456036ms to run NodePressure ...
	I1024 20:01:02.006369   45839 start.go:228] waiting for startup goroutines ...
	I1024 20:01:02.006377   45839 start.go:233] waiting for cluster config update ...
	I1024 20:01:02.006385   45839 start.go:242] writing updated cluster config ...
	I1024 20:01:02.006639   45839 ssh_runner.go:195] Run: rm -f paused
	I1024 20:01:02.060820   45839 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:01:02.064075   45839 out.go:177] * Done! kubectl is now configured to use "pause-636215" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 19:58:39 UTC, ends at Tue 2023-10-24 20:01:05 UTC. --
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.142794764Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698177665142782409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=95832f2e-9fb4-4e6c-aee8-2c4039a9ab91 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.143556723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c4f9c36d-40ee-4ec0-bf71-40cf71993495 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.143603283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c4f9c36d-40ee-4ec0-bf71-40cf71993495 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.143824816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b62b080569aa4c4c4d345bcaf4a772901e8d959d5b4b70b1be6a650693b7081,PodSandboxId:0a01d81fa6d17ecb246fc9c82d668f15c803a4fe6929301234cba8b949ce66ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698177648060765067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86e4b781ef0d2e7376c4b7211c30e04452a882306317e5e1a65adb000abdfa29,PodSandboxId:43ac1a5b9c9da3ea13de9456269b9991691d1c5fed1873b91bebaaa7a41c5dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698177646855404768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fed486f99e6b90003e75e2da908baaeeb6c72708ba187ef2bd87c92b72bc4de5,PodSandboxId:1a785aea93bd8195d06e9468b9cddd055f22d1c6d05f01dad632a7927259439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698177645625871880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab
57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3de53b518be4ebc9c45e73d110fac1fb4a131d3c07704d8a1e7bca47db01e4d,PodSandboxId:52ce598b374b43d47c032a73e17c7eeebc0b94c7e0fa3001bea58bef5c38badf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698177645349828463,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]string
{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac86699cccee17ed185d4823030d338b466254dfc003e340da78e44affe7a45f,PodSandboxId:fd5b5b1bcef69731eee8c00bc1f325e543a61bb4844ed39bce17bd48b7278007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698177644949201167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aa4a347948505f43e8148b61d57d36125e317e97e614d1dcc59a5cd8f5b7e09,PodSandboxId:ed8308e2ad8cdf8ffc3803bed0a2a95943f7b720942de8d46e26c0053c9104b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698177644713141870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56526104e12eda0b7a7745f83c7e49bda2e2efded0963685106dd3244659623,PodSandboxId:6ba69bfbf2709135969e23fceba66eee0ed0f83df1fb30a7765a83991662ab86,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698177568490939763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63ca9ca3b568a93fc8679b8d04e214fca6e218cad40f0de3a541ba015f40c27,PodSandboxId:bc7bc4404b7a9763616a5460fd1f14c041c82b901dc35272ebd5599ac12bf4e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698177568081665412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39b2fa0becbb01a311aec2ebf38ccd72596457a072295930bdc037f7a90c20d,PodSandboxId:d9947bce96b41e791b53bb7512e30401fa832d8f5dd28ea29b23adea21aceb6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698177545004286182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]
string{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516961cc67010792d8b5eb63f9378e82b801664d09031dc5cf2abf43a52eeca9,PodSandboxId:e668a7fc998df6093302d1d992e700257f49223f3af7ec06a2a3b66bbf390eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698177544738319247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca8fe6bdc353bdddf1154d3696053cc3a2198afbd08c0f20ad0a120a10073de,PodSandboxId:b1477f94e3df055d096accbf36ecbce228459dd9eaa06cc7103bdbde49a6bb38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698177544517729718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c34fb3e40fd4a77f21208f6b00ed3964eedc4cc823fa8ca8439812444eb5750,PodSandboxId:76fc5dd9ae400f473f1b15ec6a074a4ec866a3b32428a20f024462ec4bb9beca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698177544383195447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]string{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c4f9c36d-40ee-4ec0-bf71-40cf71993495 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.184716624Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=65ec0431-0ca6-4c84-ad6f-619019037f58 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.184793614Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=65ec0431-0ca6-4c84-ad6f-619019037f58 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.186684598Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8283674a-b820-40db-8c86-77ef959c09d6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.187156420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698177665187136653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=8283674a-b820-40db-8c86-77ef959c09d6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.188717237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8ea4342e-c173-42a6-8f65-82e04f0398fe name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.188769358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8ea4342e-c173-42a6-8f65-82e04f0398fe name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.188984979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b62b080569aa4c4c4d345bcaf4a772901e8d959d5b4b70b1be6a650693b7081,PodSandboxId:0a01d81fa6d17ecb246fc9c82d668f15c803a4fe6929301234cba8b949ce66ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698177648060765067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86e4b781ef0d2e7376c4b7211c30e04452a882306317e5e1a65adb000abdfa29,PodSandboxId:43ac1a5b9c9da3ea13de9456269b9991691d1c5fed1873b91bebaaa7a41c5dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698177646855404768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fed486f99e6b90003e75e2da908baaeeb6c72708ba187ef2bd87c92b72bc4de5,PodSandboxId:1a785aea93bd8195d06e9468b9cddd055f22d1c6d05f01dad632a7927259439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698177645625871880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab
57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3de53b518be4ebc9c45e73d110fac1fb4a131d3c07704d8a1e7bca47db01e4d,PodSandboxId:52ce598b374b43d47c032a73e17c7eeebc0b94c7e0fa3001bea58bef5c38badf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698177645349828463,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]string
{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac86699cccee17ed185d4823030d338b466254dfc003e340da78e44affe7a45f,PodSandboxId:fd5b5b1bcef69731eee8c00bc1f325e543a61bb4844ed39bce17bd48b7278007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698177644949201167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aa4a347948505f43e8148b61d57d36125e317e97e614d1dcc59a5cd8f5b7e09,PodSandboxId:ed8308e2ad8cdf8ffc3803bed0a2a95943f7b720942de8d46e26c0053c9104b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698177644713141870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56526104e12eda0b7a7745f83c7e49bda2e2efded0963685106dd3244659623,PodSandboxId:6ba69bfbf2709135969e23fceba66eee0ed0f83df1fb30a7765a83991662ab86,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698177568490939763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63ca9ca3b568a93fc8679b8d04e214fca6e218cad40f0de3a541ba015f40c27,PodSandboxId:bc7bc4404b7a9763616a5460fd1f14c041c82b901dc35272ebd5599ac12bf4e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698177568081665412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39b2fa0becbb01a311aec2ebf38ccd72596457a072295930bdc037f7a90c20d,PodSandboxId:d9947bce96b41e791b53bb7512e30401fa832d8f5dd28ea29b23adea21aceb6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698177545004286182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]
string{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516961cc67010792d8b5eb63f9378e82b801664d09031dc5cf2abf43a52eeca9,PodSandboxId:e668a7fc998df6093302d1d992e700257f49223f3af7ec06a2a3b66bbf390eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698177544738319247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca8fe6bdc353bdddf1154d3696053cc3a2198afbd08c0f20ad0a120a10073de,PodSandboxId:b1477f94e3df055d096accbf36ecbce228459dd9eaa06cc7103bdbde49a6bb38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698177544517729718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c34fb3e40fd4a77f21208f6b00ed3964eedc4cc823fa8ca8439812444eb5750,PodSandboxId:76fc5dd9ae400f473f1b15ec6a074a4ec866a3b32428a20f024462ec4bb9beca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698177544383195447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]string{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8ea4342e-c173-42a6-8f65-82e04f0398fe name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.239524950Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6b07def9-88a7-4774-8486-e2d99927ddee name=/runtime.v1.RuntimeService/Version
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.239581109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6b07def9-88a7-4774-8486-e2d99927ddee name=/runtime.v1.RuntimeService/Version
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.241524849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ab7adc6f-c881-45bb-81e8-361bd4337144 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.241899137Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698177665241884958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=ab7adc6f-c881-45bb-81e8-361bd4337144 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.242648086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=02c43273-a5ad-4533-9b46-152d3a2e0fb6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.242733343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=02c43273-a5ad-4533-9b46-152d3a2e0fb6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.242992442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b62b080569aa4c4c4d345bcaf4a772901e8d959d5b4b70b1be6a650693b7081,PodSandboxId:0a01d81fa6d17ecb246fc9c82d668f15c803a4fe6929301234cba8b949ce66ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698177648060765067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86e4b781ef0d2e7376c4b7211c30e04452a882306317e5e1a65adb000abdfa29,PodSandboxId:43ac1a5b9c9da3ea13de9456269b9991691d1c5fed1873b91bebaaa7a41c5dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698177646855404768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fed486f99e6b90003e75e2da908baaeeb6c72708ba187ef2bd87c92b72bc4de5,PodSandboxId:1a785aea93bd8195d06e9468b9cddd055f22d1c6d05f01dad632a7927259439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698177645625871880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab
57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3de53b518be4ebc9c45e73d110fac1fb4a131d3c07704d8a1e7bca47db01e4d,PodSandboxId:52ce598b374b43d47c032a73e17c7eeebc0b94c7e0fa3001bea58bef5c38badf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698177645349828463,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]string
{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac86699cccee17ed185d4823030d338b466254dfc003e340da78e44affe7a45f,PodSandboxId:fd5b5b1bcef69731eee8c00bc1f325e543a61bb4844ed39bce17bd48b7278007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698177644949201167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aa4a347948505f43e8148b61d57d36125e317e97e614d1dcc59a5cd8f5b7e09,PodSandboxId:ed8308e2ad8cdf8ffc3803bed0a2a95943f7b720942de8d46e26c0053c9104b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698177644713141870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56526104e12eda0b7a7745f83c7e49bda2e2efded0963685106dd3244659623,PodSandboxId:6ba69bfbf2709135969e23fceba66eee0ed0f83df1fb30a7765a83991662ab86,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698177568490939763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63ca9ca3b568a93fc8679b8d04e214fca6e218cad40f0de3a541ba015f40c27,PodSandboxId:bc7bc4404b7a9763616a5460fd1f14c041c82b901dc35272ebd5599ac12bf4e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698177568081665412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39b2fa0becbb01a311aec2ebf38ccd72596457a072295930bdc037f7a90c20d,PodSandboxId:d9947bce96b41e791b53bb7512e30401fa832d8f5dd28ea29b23adea21aceb6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698177545004286182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]
string{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516961cc67010792d8b5eb63f9378e82b801664d09031dc5cf2abf43a52eeca9,PodSandboxId:e668a7fc998df6093302d1d992e700257f49223f3af7ec06a2a3b66bbf390eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698177544738319247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca8fe6bdc353bdddf1154d3696053cc3a2198afbd08c0f20ad0a120a10073de,PodSandboxId:b1477f94e3df055d096accbf36ecbce228459dd9eaa06cc7103bdbde49a6bb38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698177544517729718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c34fb3e40fd4a77f21208f6b00ed3964eedc4cc823fa8ca8439812444eb5750,PodSandboxId:76fc5dd9ae400f473f1b15ec6a074a4ec866a3b32428a20f024462ec4bb9beca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698177544383195447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]string{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=02c43273-a5ad-4533-9b46-152d3a2e0fb6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.290512714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5fb4b0c2-290c-428c-a7a3-cfb62e1543ab name=/runtime.v1.RuntimeService/Version
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.290594478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5fb4b0c2-290c-428c-a7a3-cfb62e1543ab name=/runtime.v1.RuntimeService/Version
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.291893297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=735d9f63-35a9-4f18-8efa-00a2ebb89ca6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.292369125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698177665292353887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=735d9f63-35a9-4f18-8efa-00a2ebb89ca6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.292942059Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2c0e438d-c4cd-492e-b042-1f8aca76a914 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.293014149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2c0e438d-c4cd-492e-b042-1f8aca76a914 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:01:05 pause-636215 crio[2390]: time="2023-10-24 20:01:05.293304216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b62b080569aa4c4c4d345bcaf4a772901e8d959d5b4b70b1be6a650693b7081,PodSandboxId:0a01d81fa6d17ecb246fc9c82d668f15c803a4fe6929301234cba8b949ce66ea,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698177648060765067,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86e4b781ef0d2e7376c4b7211c30e04452a882306317e5e1a65adb000abdfa29,PodSandboxId:43ac1a5b9c9da3ea13de9456269b9991691d1c5fed1873b91bebaaa7a41c5dac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698177646855404768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fed486f99e6b90003e75e2da908baaeeb6c72708ba187ef2bd87c92b72bc4de5,PodSandboxId:1a785aea93bd8195d06e9468b9cddd055f22d1c6d05f01dad632a7927259439d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698177645625871880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab
57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3de53b518be4ebc9c45e73d110fac1fb4a131d3c07704d8a1e7bca47db01e4d,PodSandboxId:52ce598b374b43d47c032a73e17c7eeebc0b94c7e0fa3001bea58bef5c38badf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698177645349828463,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]string
{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac86699cccee17ed185d4823030d338b466254dfc003e340da78e44affe7a45f,PodSandboxId:fd5b5b1bcef69731eee8c00bc1f325e543a61bb4844ed39bce17bd48b7278007,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698177644949201167,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aa4a347948505f43e8148b61d57d36125e317e97e614d1dcc59a5cd8f5b7e09,PodSandboxId:ed8308e2ad8cdf8ffc3803bed0a2a95943f7b720942de8d46e26c0053c9104b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698177644713141870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f56526104e12eda0b7a7745f83c7e49bda2e2efded0963685106dd3244659623,PodSandboxId:6ba69bfbf2709135969e23fceba66eee0ed0f83df1fb30a7765a83991662ab86,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1698177568490939763,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nfdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ce8a3a2-b4f0-4b9a-927a-e21e316be0ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5aeb9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63ca9ca3b568a93fc8679b8d04e214fca6e218cad40f0de3a541ba015f40c27,PodSandboxId:bc7bc4404b7a9763616a5460fd1f14c041c82b901dc35272ebd5599ac12bf4e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1698177568081665412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6wlp,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 613a996e-22d0-4368-9200-a74934795f57,},Annotations:map[string]string{io.kubernetes.container.hash: a6efa7ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39b2fa0becbb01a311aec2ebf38ccd72596457a072295930bdc037f7a90c20d,PodSandboxId:d9947bce96b41e791b53bb7512e30401fa832d8f5dd28ea29b23adea21aceb6d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1698177545004286182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2fd1d4640becbebb7684c16ab4d880f,},Annotations:map[string]
string{io.kubernetes.container.hash: 9480edff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516961cc67010792d8b5eb63f9378e82b801664d09031dc5cf2abf43a52eeca9,PodSandboxId:e668a7fc998df6093302d1d992e700257f49223f3af7ec06a2a3b66bbf390eb7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1698177544738319247,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21b1b3cc77ec96c7c17920c751192fef,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca8fe6bdc353bdddf1154d3696053cc3a2198afbd08c0f20ad0a120a10073de,PodSandboxId:b1477f94e3df055d096accbf36ecbce228459dd9eaa06cc7103bdbde49a6bb38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,State:CONTAINER_EXITED,CreatedAt:1698177544517729718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2144d9ab57e21c8ce3e1a3983e6ed460,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c34fb3e40fd4a77f21208f6b00ed3964eedc4cc823fa8ca8439812444eb5750,PodSandboxId:76fc5dd9ae400f473f1b15ec6a074a4ec866a3b32428a20f024462ec4bb9beca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1698177544383195447,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-636215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d05616c6f89165d21cdd2d079b07b9,},Annotations:map[string]string{io.kubernetes.container.hash: a121b5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2c0e438d-c4cd-492e-b042-1f8aca76a914 name=/runtime.v1.RuntimeService/ListContainers
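The crio[2390] debug entries above are iterations of the kubelet's CRI polling loop: a RuntimeService/Version probe, an ImageService/ImageFsInfo query, and an unfiltered RuntimeService/ListContainers call, repeated roughly every 50 ms. Below is a minimal Go sketch, not part of the test harness, that replays the same ListContainers request against the CRI-O socket advertised in the node's cri-socket annotation (unix:///var/run/crio/crio.sock); the gRPC dial options are an assumption.

    // Sketch only: replay the unfiltered ListContainers call seen in the crio debug log.
    // Uses the generated k8s.io/cri-api client; error handling kept minimal.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Socket path taken from the kubeadm.alpha.kubernetes.io/cri-socket annotation below.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter is what produces "No filters were applied, returning full container list".
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // Truncated 13-character ids match the container status table below.
            fmt.Printf("%s  %-25s attempt=%d  %s\n", c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
        }
    }

Run on the node, this should list the same twelve containers (six RUNNING attempt-1 instances and six EXITED attempt-0 instances) shown in the container status table that follows.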
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6b62b080569aa       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   17 seconds ago       Running             kube-proxy                1                   0a01d81fa6d17       kube-proxy-d6wlp
	86e4b781ef0d2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 seconds ago       Running             coredns                   1                   43ac1a5b9c9da       coredns-5dd5756b68-nfdht
	fed486f99e6b9       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   19 seconds ago       Running             kube-scheduler            1                   1a785aea93bd8       kube-scheduler-pause-636215
	e3de53b518be4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   20 seconds ago       Running             etcd                      1                   52ce598b374b4       etcd-pause-636215
	ac86699cccee1       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   20 seconds ago       Running             kube-controller-manager   1                   fd5b5b1bcef69       kube-controller-manager-pause-636215
	2aa4a34794850       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   20 seconds ago       Running             kube-apiserver            1                   ed8308e2ad8cd       kube-apiserver-pause-636215
	f56526104e12e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   0                   6ba69bfbf2709       coredns-5dd5756b68-nfdht
	b63ca9ca3b568       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   About a minute ago   Exited              kube-proxy                0                   bc7bc4404b7a9       kube-proxy-d6wlp
	d39b2fa0becbb       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   2 minutes ago        Exited              etcd                      0                   d9947bce96b41       etcd-pause-636215
	516961cc67010       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   2 minutes ago        Exited              kube-controller-manager   0                   e668a7fc998df       kube-controller-manager-pause-636215
	2ca8fe6bdc353       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   2 minutes ago        Exited              kube-scheduler            0                   b1477f94e3df0       kube-scheduler-pause-636215
	6c34fb3e40fd4       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   2 minutes ago        Exited              kube-apiserver            0                   76fc5dd9ae400       kube-apiserver-pause-636215
	
	* 
	* ==> coredns [86e4b781ef0d2e7376c4b7211c30e04452a882306317e5e1a65adb000abdfa29] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58232 - 19053 "HINFO IN 2223351225957930505.8595811934903700078. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008637598s
	
	* 
	* ==> coredns [f56526104e12eda0b7a7745f83c7e49bda2e2efded0963685106dd3244659623] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
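Both coredns logs above come from the same pod (coredns-5dd5756b68-nfdht), after and before its restart. The messages map onto the stock kubeadm-style Corefile: "plugin/ready" and "plugin/kubernetes" wait for the API server, "plugin/reload" logs the configuration SHA512 when the loaded Corefile changes, and "plugin/health: Going into lameduck mode for 5s" is the health plugin's lameduck option firing on SIGTERM. The cluster's actual Corefile is not captured in this log, so the snippet below is only the upstream default, shown to identify which plugin emits which message.

    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }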
	
	* 
	* ==> describe nodes <==
	* Name:               pause-636215
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-636215
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=pause-636215
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_59_12_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:59:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-636215
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 20:01:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:59:32 +0000   Tue, 24 Oct 2023 19:59:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:59:32 +0000   Tue, 24 Oct 2023 19:59:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:59:32 +0000   Tue, 24 Oct 2023 19:59:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:59:32 +0000   Tue, 24 Oct 2023 19:59:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    pause-636215
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 0be8918910684939999ff169bd67f488
	  System UUID:                0be89189-1068-4939-999f-f169bd67f488
	  Boot ID:                    f988ab5d-0742-4b8b-8aaa-aadec0bdc029
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-nfdht                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     99s
	  kube-system                 etcd-pause-636215                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         113s
	  kube-system                 kube-apiserver-pause-636215             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-pause-636215    200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-d6wlp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-scheduler-pause-636215             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 96s   kube-proxy       
	  Normal  Starting                 14s   kube-proxy       
	  Normal  Starting                 113s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  113s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  113s  kubelet          Node pause-636215 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s  kubelet          Node pause-636215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s  kubelet          Node pause-636215 status is now: NodeHasSufficientPID
	  Normal  NodeReady                113s  kubelet          Node pause-636215 status is now: NodeReady
	  Normal  RegisteredNode           100s  node-controller  Node pause-636215 event: Registered Node pause-636215 in Controller
	  Normal  RegisteredNode           2s    node-controller  Node pause-636215 event: Registered Node pause-636215 in Controller
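For comparison with the describe output above, here is a minimal client-go sketch (not part of the harness; the kubeconfig location is an assumption) that reads the same conditions and allocatable resources for pause-636215.

    // Sketch only: fetch the node status that `kubectl describe node pause-636215` renders.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // clientcmd.RecommendedHomeFile is ~/.kube/config; point it at the profile's kubeconfig as needed.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-636215", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
        }
        fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String(),
            "memory:", node.Status.Allocatable.Memory().String())
    }

Against this node it should report MemoryPressure, DiskPressure, and PIDPressure as False and Ready as True, with 2 CPU and 2017420Ki memory allocatable, matching the Conditions and Allocatable sections above.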
	
	* 
	* ==> dmesg <==
	* [Oct24 19:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076372] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.696977] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.650739] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.166061] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.324317] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.694535] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.098205] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.138338] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.136986] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.250709] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[Oct24 19:59] systemd-fstab-generator[931]: Ignoring "noauto" for root device
	[  +8.781282] systemd-fstab-generator[1265]: Ignoring "noauto" for root device
	[Oct24 20:00] kauditd_printk_skb: 21 callbacks suppressed
	[ +30.331790] systemd-fstab-generator[2101]: Ignoring "noauto" for root device
	[  +0.155781] systemd-fstab-generator[2112]: Ignoring "noauto" for root device
	[  +0.198113] systemd-fstab-generator[2126]: Ignoring "noauto" for root device
	[  +0.135747] systemd-fstab-generator[2137]: Ignoring "noauto" for root device
	[  +0.260948] systemd-fstab-generator[2160]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [d39b2fa0becbb01a311aec2ebf38ccd72596457a072295930bdc037f7a90c20d] <==
	* {"level":"info","ts":"2023-10-24T19:59:25.982868Z","caller":"traceutil/trace.go:171","msg":"trace[719258477] transaction","detail":"{read_only:false; response_revision:296; number_of_response:1; }","duration":"252.540535ms","start":"2023-10-24T19:59:25.730322Z","end":"2023-10-24T19:59:25.982862Z","steps":["trace[719258477] 'process raft request'  (duration: 248.007091ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:59:25.991003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.590116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-636215\" ","response":"range_response_count:1 size:5473"}
	{"level":"info","ts":"2023-10-24T19:59:25.991169Z","caller":"traceutil/trace.go:171","msg":"trace[359796920] range","detail":"{range_begin:/registry/minions/pause-636215; range_end:; response_count:1; response_revision:300; }","duration":"101.751882ms","start":"2023-10-24T19:59:25.889399Z","end":"2023-10-24T19:59:25.991151Z","steps":["trace[359796920] 'agreement among raft nodes before linearized reading'  (duration: 101.538803ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:59:25.99203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.472191ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T19:59:25.992535Z","caller":"traceutil/trace.go:171","msg":"trace[1120418816] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:300; }","duration":"160.977866ms","start":"2023-10-24T19:59:25.831544Z","end":"2023-10-24T19:59:25.992522Z","steps":["trace[1120418816] 'agreement among raft nodes before linearized reading'  (duration: 160.363723ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:59:25.993633Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.974188ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" ","response":"range_response_count:1 size:234"}
	{"level":"info","ts":"2023-10-24T19:59:25.993711Z","caller":"traceutil/trace.go:171","msg":"trace[1138855935] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:300; }","duration":"260.055313ms","start":"2023-10-24T19:59:25.733646Z","end":"2023-10-24T19:59:25.993702Z","steps":["trace[1138855935] 'agreement among raft nodes before linearized reading'  (duration: 259.785364ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:59:50.096712Z","caller":"traceutil/trace.go:171","msg":"trace[1520055643] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"145.308771ms","start":"2023-10-24T19:59:49.951376Z","end":"2023-10-24T19:59:50.096685Z","steps":["trace[1520055643] 'process raft request'  (duration: 145.027712ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:59:50.836869Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.386762ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16466157921071883274 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.169\" mod_revision:374 > success:<request_put:<key:\"/registry/masterleases/192.168.39.169\" value_size:67 lease:7242785884217107464 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.169\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-24T19:59:50.837205Z","caller":"traceutil/trace.go:171","msg":"trace[1295792892] linearizableReadLoop","detail":"{readStateIndex:397; appliedIndex:396; }","duration":"247.09247ms","start":"2023-10-24T19:59:50.590097Z","end":"2023-10-24T19:59:50.837189Z","steps":["trace[1295792892] 'read index received'  (duration: 116.516684ms)","trace[1295792892] 'applied index is now lower than readState.Index'  (duration: 130.573793ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:59:50.837269Z","caller":"traceutil/trace.go:171","msg":"trace[918706409] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"258.828922ms","start":"2023-10-24T19:59:50.578422Z","end":"2023-10-24T19:59:50.837251Z","steps":["trace[918706409] 'process raft request'  (duration: 128.206124ms)","trace[918706409] 'compare'  (duration: 129.185287ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T19:59:50.837601Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.841492ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-nfdht\" ","response":"range_response_count:1 size:4736"}
	{"level":"info","ts":"2023-10-24T19:59:50.837674Z","caller":"traceutil/trace.go:171","msg":"trace[1201623625] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-nfdht; range_end:; response_count:1; response_revision:378; }","duration":"238.915333ms","start":"2023-10-24T19:59:50.598749Z","end":"2023-10-24T19:59:50.837665Z","steps":["trace[1201623625] 'agreement among raft nodes before linearized reading'  (duration: 238.814289ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:59:50.837379Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.365649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T19:59:50.837867Z","caller":"traceutil/trace.go:171","msg":"trace[366836922] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:378; }","duration":"247.868101ms","start":"2023-10-24T19:59:50.589988Z","end":"2023-10-24T19:59:50.837857Z","steps":["trace[366836922] 'agreement among raft nodes before linearized reading'  (duration: 247.310218ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:00:31.582367Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-24T20:00:31.582471Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-636215","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.169:2380"],"advertise-client-urls":["https://192.168.39.169:2379"]}
	{"level":"warn","ts":"2023-10-24T20:00:31.582586Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T20:00:31.582672Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T20:00:31.676117Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.169:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T20:00:31.676226Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.169:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-24T20:00:31.676406Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"88d17c48ad0ae483","current-leader-member-id":"88d17c48ad0ae483"}
	{"level":"info","ts":"2023-10-24T20:00:31.680307Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.169:2380"}
	{"level":"info","ts":"2023-10-24T20:00:31.680626Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.169:2380"}
	{"level":"info","ts":"2023-10-24T20:00:31.680662Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-636215","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.169:2380"],"advertise-client-urls":["https://192.168.39.169:2379"]}
	
	* 
	* ==> etcd [e3de53b518be4ebc9c45e73d110fac1fb4a131d3c07704d8a1e7bca47db01e4d] <==
	* {"level":"info","ts":"2023-10-24T20:00:47.819693Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-24T20:00:47.819719Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-24T20:00:47.820143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 switched to configuration voters=(9858797710873388163)"}
	{"level":"info","ts":"2023-10-24T20:00:47.822221Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-24T20:00:47.822805Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"88d17c48ad0ae483","initial-advertise-peer-urls":["https://192.168.39.169:2380"],"listen-peer-urls":["https://192.168.39.169:2380"],"advertise-client-urls":["https://192.168.39.169:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.169:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-24T20:00:47.82286Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T20:00:47.822254Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.169:2380"}
	{"level":"info","ts":"2023-10-24T20:00:47.822921Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.169:2380"}
	{"level":"info","ts":"2023-10-24T20:00:47.823447Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dd1030519101f266","local-member-id":"88d17c48ad0ae483","added-peer-id":"88d17c48ad0ae483","added-peer-peer-urls":["https://192.168.39.169:2380"]}
	{"level":"info","ts":"2023-10-24T20:00:47.823937Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dd1030519101f266","local-member-id":"88d17c48ad0ae483","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T20:00:47.824208Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T20:00:49.105254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-24T20:00:49.105328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-24T20:00:49.105353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 received MsgPreVoteResp from 88d17c48ad0ae483 at term 2"}
	{"level":"info","ts":"2023-10-24T20:00:49.105365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 became candidate at term 3"}
	{"level":"info","ts":"2023-10-24T20:00:49.105371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 received MsgVoteResp from 88d17c48ad0ae483 at term 3"}
	{"level":"info","ts":"2023-10-24T20:00:49.105379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 became leader at term 3"}
	{"level":"info","ts":"2023-10-24T20:00:49.105386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 88d17c48ad0ae483 elected leader 88d17c48ad0ae483 at term 3"}
	{"level":"info","ts":"2023-10-24T20:00:49.107973Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"88d17c48ad0ae483","local-member-attributes":"{Name:pause-636215 ClientURLs:[https://192.168.39.169:2379]}","request-path":"/0/members/88d17c48ad0ae483/attributes","cluster-id":"dd1030519101f266","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T20:00:49.108007Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T20:00:49.108361Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T20:00:49.109883Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T20:00:49.11113Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T20:00:49.111258Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T20:00:49.109883Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.169:2379"}
	
	* 
	* ==> kernel <==
	*  20:01:05 up 2 min,  0 users,  load average: 1.21, 0.53, 0.20
	Linux pause-636215 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2aa4a347948505f43e8148b61d57d36125e317e97e614d1dcc59a5cd8f5b7e09] <==
	* I1024 20:00:50.737461       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1024 20:00:50.737755       1 controller.go:134] Starting OpenAPI controller
	I1024 20:00:50.739241       1 controller.go:85] Starting OpenAPI V3 controller
	I1024 20:00:50.739310       1 naming_controller.go:291] Starting NamingConditionController
	I1024 20:00:50.739361       1 establishing_controller.go:76] Starting EstablishingController
	I1024 20:00:50.739414       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1024 20:00:50.739461       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1024 20:00:50.739504       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1024 20:00:50.852839       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1024 20:00:50.865581       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 20:00:50.926737       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1024 20:00:50.926780       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1024 20:00:50.932865       1 shared_informer.go:318] Caches are synced for configmaps
	I1024 20:00:50.935439       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 20:00:50.935511       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1024 20:00:50.936774       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1024 20:00:50.937317       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1024 20:00:50.937368       1 aggregator.go:166] initial CRD sync complete...
	I1024 20:00:50.937379       1 autoregister_controller.go:141] Starting autoregister controller
	I1024 20:00:50.937383       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1024 20:00:50.937388       1 cache.go:39] Caches are synced for autoregister controller
	E1024 20:00:50.962641       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1024 20:00:51.734333       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1024 20:01:03.270388       1 controller.go:624] quota admission added evaluator for: endpoints
	I1024 20:01:03.344912       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [6c34fb3e40fd4a77f21208f6b00ed3964eedc4cc823fa8ca8439812444eb5750] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1024 20:00:31.615752       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1024 20:00:31.621017       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1024 20:00:31.621257       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [516961cc67010792d8b5eb63f9378e82b801664d09031dc5cf2abf43a52eeca9] <==
	* I1024 19:59:25.293434       1 shared_informer.go:318] Caches are synced for HPA
	I1024 19:59:25.366707       1 range_allocator.go:380] "Set node PodCIDR" node="pause-636215" podCIDRs=["10.244.0.0/24"]
	I1024 19:59:25.667471       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:59:25.687272       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:59:25.687336       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1024 19:59:26.001350       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1024 19:59:26.111332       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d6wlp"
	I1024 19:59:26.150916       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-g85kj"
	I1024 19:59:26.216675       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-nfdht"
	I1024 19:59:26.247441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="261.438949ms"
	I1024 19:59:26.315411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.871532ms"
	I1024 19:59:26.315679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="139.162µs"
	I1024 19:59:26.347497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="230.911µs"
	I1024 19:59:26.371027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.517µs"
	I1024 19:59:26.432360       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1024 19:59:26.473170       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-g85kj"
	I1024 19:59:26.486330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.631057ms"
	I1024 19:59:26.497590       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.20237ms"
	I1024 19:59:26.498528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="426.931µs"
	I1024 19:59:28.283702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.758µs"
	I1024 19:59:28.297410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.141µs"
	I1024 19:59:28.317683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.176µs"
	I1024 19:59:29.316702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="230.497µs"
	I1024 20:00:07.527835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.705044ms"
	I1024 20:00:07.528288       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.241µs"
	
	* 
	* ==> kube-controller-manager [ac86699cccee17ed185d4823030d338b466254dfc003e340da78e44affe7a45f] <==
	* I1024 20:01:03.267129       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1024 20:01:03.267333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.844µs"
	I1024 20:01:03.291550       1 shared_informer.go:318] Caches are synced for HPA
	I1024 20:01:03.295554       1 shared_informer.go:318] Caches are synced for stateful set
	I1024 20:01:03.300473       1 shared_informer.go:318] Caches are synced for node
	I1024 20:01:03.300804       1 range_allocator.go:174] "Sending events to api server"
	I1024 20:01:03.300931       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1024 20:01:03.300958       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1024 20:01:03.301180       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1024 20:01:03.301308       1 shared_informer.go:318] Caches are synced for crt configmap
	I1024 20:01:03.302588       1 shared_informer.go:318] Caches are synced for GC
	I1024 20:01:03.304833       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1024 20:01:03.307141       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1024 20:01:03.309687       1 shared_informer.go:318] Caches are synced for TTL
	I1024 20:01:03.311107       1 shared_informer.go:318] Caches are synced for deployment
	I1024 20:01:03.312451       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1024 20:01:03.324785       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1024 20:01:03.330521       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1024 20:01:03.332877       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1024 20:01:03.334532       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1024 20:01:03.363375       1 shared_informer.go:318] Caches are synced for resource quota
	I1024 20:01:03.380821       1 shared_informer.go:318] Caches are synced for resource quota
	I1024 20:01:03.794295       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 20:01:03.794393       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1024 20:01:03.841176       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [6b62b080569aa4c4c4d345bcaf4a772901e8d959d5b4b70b1be6a650693b7081] <==
	* I1024 20:00:48.272795       1 server_others.go:69] "Using iptables proxy"
	I1024 20:00:50.883442       1 node.go:141] Successfully retrieved node IP: 192.168.39.169
	I1024 20:00:51.016683       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 20:00:51.016769       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 20:00:51.021518       1 server_others.go:152] "Using iptables Proxier"
	I1024 20:00:51.021703       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 20:00:51.022015       1 server.go:846] "Version info" version="v1.28.3"
	I1024 20:00:51.022998       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:00:51.025178       1 config.go:188] "Starting service config controller"
	I1024 20:00:51.025463       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 20:00:51.025615       1 config.go:97] "Starting endpoint slice config controller"
	I1024 20:00:51.025758       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 20:00:51.029199       1 config.go:315] "Starting node config controller"
	I1024 20:00:51.029254       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 20:00:51.126544       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 20:00:51.126680       1 shared_informer.go:318] Caches are synced for service config
	I1024 20:00:51.129351       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [b63ca9ca3b568a93fc8679b8d04e214fca6e218cad40f0de3a541ba015f40c27] <==
	* I1024 19:59:28.729311       1 server_others.go:69] "Using iptables proxy"
	I1024 19:59:28.759363       1 node.go:141] Successfully retrieved node IP: 192.168.39.169
	I1024 19:59:28.855830       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 19:59:28.855930       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 19:59:28.862019       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:59:28.862974       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:59:28.863463       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:59:28.863516       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:59:28.866778       1 config.go:188] "Starting service config controller"
	I1024 19:59:28.867497       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:59:28.867578       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:59:28.867588       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:59:28.871305       1 config.go:315] "Starting node config controller"
	I1024 19:59:28.871348       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:59:28.968314       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 19:59:28.968641       1 shared_informer.go:318] Caches are synced for service config
	I1024 19:59:28.971841       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2ca8fe6bdc353bdddf1154d3696053cc3a2198afbd08c0f20ad0a120a10073de] <==
	* E1024 19:59:08.791366       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1024 19:59:08.791199       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:59:08.791376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1024 19:59:09.610574       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:59:09.610679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1024 19:59:09.724738       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:59:09.724834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1024 19:59:09.746379       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:59:09.746464       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 19:59:09.752393       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 19:59:09.752448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1024 19:59:09.842296       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:59:09.842519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1024 19:59:09.928113       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 19:59:09.928199       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1024 19:59:09.940578       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1024 19:59:09.940677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1024 19:59:10.007610       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1024 19:59:10.007706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1024 19:59:10.018588       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 19:59:10.018668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1024 19:59:10.081260       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 19:59:10.081338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1024 19:59:11.474927       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1024 20:00:31.598813       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [fed486f99e6b90003e75e2da908baaeeb6c72708ba187ef2bd87c92b72bc4de5] <==
	* I1024 20:00:48.019627       1 serving.go:348] Generated self-signed cert in-memory
	W1024 20:00:50.824137       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 20:00:50.824273       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 20:00:50.824288       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 20:00:50.824295       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 20:00:50.882876       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 20:00:50.882999       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:00:50.885784       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 20:00:50.886495       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 20:00:50.886629       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 20:00:50.886757       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 20:00:50.987005       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 19:58:39 UTC, ends at Tue 2023-10-24 20:01:06 UTC. --
	Oct 24 20:00:42 pause-636215 kubelet[1272]: I1024 20:00:42.171253    1272 status_manager.go:853] "Failed to get status for pod" podUID="21b1b3cc77ec96c7c17920c751192fef" pod="kube-system/kube-controller-manager-pause-636215" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-636215\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:42 pause-636215 kubelet[1272]: I1024 20:00:42.171431    1272 status_manager.go:853] "Failed to get status for pod" podUID="45d05616c6f89165d21cdd2d079b07b9" pod="kube-system/kube-apiserver-pause-636215" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-636215\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.306263    1272 remote_runtime.go:633] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.306435    1272 kubelet.go:2840] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.333877    1272 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.334131    1272 container_log_manager.go:185] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.523417    1272 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="nil"
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.523463    1272 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.523482    1272 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 24 20:00:42 pause-636215 kubelet[1272]: E1024 20:00:42.645891    1272 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-pause-636215.1791238f04673d64", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-pause-636215", UID:"45d05616c6f89165d21cdd2d079b07b9", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"https://192.168.39.169:8443/readyz\": EOF", Source:v1.EventSource{Component:"kubelet", Host:"pause-636215"}, FirstTimestamp:time.Date(2023, time.October, 24, 20, 0, 31, 656557924, time.Local), LastTimestamp:time.Date(2023, time.October, 24, 20, 0, 31, 656557924, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-636215"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.39.169:8443: connect: connection refused'(may retry after sleeping)
	Oct 24 20:00:43 pause-636215 kubelet[1272]: I1024 20:00:43.534992    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a34640eb72ebfb29982cd29c5c46478a4ec1ab2117060277038ef7583327008"
	Oct 24 20:00:43 pause-636215 kubelet[1272]: I1024 20:00:43.556629    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0add9ee1f3204aea13f669b6997dfb8d915e188da85cabe014395dd9656fa928"
	Oct 24 20:00:43 pause-636215 kubelet[1272]: I1024 20:00:43.578370    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aebd77244f5223d392efa1a63157baf9aea9029ad82d8cc4ac77d877e826758"
	Oct 24 20:00:43 pause-636215 kubelet[1272]: I1024 20:00:43.587341    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d174073fc1dcdef00d0adc5a3321dfa5d8e9f8fb9f6eaf79fe527375154aa21"
	Oct 24 20:00:43 pause-636215 kubelet[1272]: I1024 20:00:43.601993    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f507dc38a3ef393b26695ff082e7d5f54ae74985848113ed46ed84acf153079"
	Oct 24 20:00:43 pause-636215 kubelet[1272]: I1024 20:00:43.641295    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e828ea9dd44b4316d065027f85e3acef67995c0117f532e2c0bdd8c4a0edc8a4"
	Oct 24 20:00:44 pause-636215 kubelet[1272]: E1024 20:00:44.383859    1272 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-636215\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-636215?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:44 pause-636215 kubelet[1272]: E1024 20:00:44.384365    1272 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-636215\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-636215?timeout=10s\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:44 pause-636215 kubelet[1272]: E1024 20:00:44.384677    1272 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-636215\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-636215?timeout=10s\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:44 pause-636215 kubelet[1272]: E1024 20:00:44.384972    1272 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-636215\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-636215?timeout=10s\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:44 pause-636215 kubelet[1272]: E1024 20:00:44.385301    1272 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-636215\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-636215?timeout=10s\": dial tcp 192.168.39.169:8443: connect: connection refused"
	Oct 24 20:00:44 pause-636215 kubelet[1272]: E1024 20:00:44.385353    1272 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Oct 24 20:00:46 pause-636215 kubelet[1272]: E1024 20:00:46.225957    1272 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-636215?timeout=10s\": dial tcp 192.168.39.169:8443: connect: connection refused" interval="7s"
	Oct 24 20:00:50 pause-636215 kubelet[1272]: E1024 20:00:50.779811    1272 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Oct 24 20:00:50 pause-636215 kubelet[1272]: E1024 20:00:50.779960    1272 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-636215 -n pause-636215
helpers_test.go:261: (dbg) Run:  kubectl --context pause-636215 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (57.63s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (140.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-014826 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-014826 --alsologtostderr -v=3: exit status 82 (2m1.486970436s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-014826"  ...
	* Stopping node "no-preload-014826"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 20:03:55.774773   48270 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:03:55.774863   48270 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:03:55.774872   48270 out.go:309] Setting ErrFile to fd 2...
	I1024 20:03:55.774876   48270 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:03:55.775037   48270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:03:55.775244   48270 out.go:303] Setting JSON to false
	I1024 20:03:55.775315   48270 mustload.go:65] Loading cluster: no-preload-014826
	I1024 20:03:55.775644   48270 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:03:55.775703   48270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/config.json ...
	I1024 20:03:55.775856   48270 mustload.go:65] Loading cluster: no-preload-014826
	I1024 20:03:55.775951   48270 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:03:55.775975   48270 stop.go:39] StopHost: no-preload-014826
	I1024 20:03:55.776319   48270 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:03:55.776366   48270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:03:55.792236   48270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I1024 20:03:55.792673   48270 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:03:55.793271   48270 main.go:141] libmachine: Using API Version  1
	I1024 20:03:55.793312   48270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:03:55.793655   48270 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:03:55.795955   48270 out.go:177] * Stopping node "no-preload-014826"  ...
	I1024 20:03:55.797205   48270 main.go:141] libmachine: Stopping "no-preload-014826"...
	I1024 20:03:55.797226   48270 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:03:55.798818   48270 main.go:141] libmachine: (no-preload-014826) Calling .Stop
	I1024 20:03:55.802025   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 0/60
	I1024 20:03:56.803538   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 1/60
	I1024 20:03:57.804968   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 2/60
	I1024 20:03:58.806230   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 3/60
	I1024 20:03:59.807627   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 4/60
	I1024 20:04:00.809559   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 5/60
	I1024 20:04:01.810936   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 6/60
	I1024 20:04:02.812230   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 7/60
	I1024 20:04:03.814131   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 8/60
	I1024 20:04:04.816514   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 9/60
	I1024 20:04:05.817999   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 10/60
	I1024 20:04:06.820351   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 11/60
	I1024 20:04:07.821937   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 12/60
	I1024 20:04:08.823856   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 13/60
	I1024 20:04:09.825457   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 14/60
	I1024 20:04:10.827389   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 15/60
	I1024 20:04:11.828790   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 16/60
	I1024 20:04:12.830177   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 17/60
	I1024 20:04:13.831884   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 18/60
	I1024 20:04:14.833197   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 19/60
	I1024 20:04:15.835343   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 20/60
	I1024 20:04:16.837154   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 21/60
	I1024 20:04:17.838847   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 22/60
	I1024 20:04:18.840102   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 23/60
	I1024 20:04:19.841500   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 24/60
	I1024 20:04:20.843587   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 25/60
	I1024 20:04:21.845141   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 26/60
	I1024 20:04:22.846627   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 27/60
	I1024 20:04:23.848803   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 28/60
	I1024 20:04:24.850557   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 29/60
	I1024 20:04:25.852220   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 30/60
	I1024 20:04:26.854281   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 31/60
	I1024 20:04:27.855840   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 32/60
	I1024 20:04:28.857484   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 33/60
	I1024 20:04:29.858979   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 34/60
	I1024 20:04:30.861196   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 35/60
	I1024 20:04:31.862753   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 36/60
	I1024 20:04:32.864324   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 37/60
	I1024 20:04:33.865805   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 38/60
	I1024 20:04:34.868127   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 39/60
	I1024 20:04:35.870340   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 40/60
	I1024 20:04:36.872130   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 41/60
	I1024 20:04:37.873692   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 42/60
	I1024 20:04:38.875460   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 43/60
	I1024 20:04:39.876918   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 44/60
	I1024 20:04:40.878538   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 45/60
	I1024 20:04:41.880581   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 46/60
	I1024 20:04:42.882104   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 47/60
	I1024 20:04:43.884166   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 48/60
	I1024 20:04:44.885529   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 49/60
	I1024 20:04:45.887414   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 50/60
	I1024 20:04:46.888622   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 51/60
	I1024 20:04:47.890510   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 52/60
	I1024 20:04:48.891835   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 53/60
	I1024 20:04:49.892974   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 54/60
	I1024 20:04:50.894937   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 55/60
	I1024 20:04:51.896432   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 56/60
	I1024 20:04:52.898276   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 57/60
	I1024 20:04:53.899492   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 58/60
	I1024 20:04:54.900903   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 59/60
	I1024 20:04:55.902169   48270 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1024 20:04:55.902235   48270 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1024 20:04:55.902254   48270 retry.go:31] will retry after 1.171559627s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1024 20:04:57.074560   48270 stop.go:39] StopHost: no-preload-014826
	I1024 20:04:57.074912   48270 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:04:57.074965   48270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:04:57.089010   48270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I1024 20:04:57.089535   48270 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:04:57.090107   48270 main.go:141] libmachine: Using API Version  1
	I1024 20:04:57.090126   48270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:04:57.090449   48270 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:04:57.092564   48270 out.go:177] * Stopping node "no-preload-014826"  ...
	I1024 20:04:57.093881   48270 main.go:141] libmachine: Stopping "no-preload-014826"...
	I1024 20:04:57.093897   48270 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:04:57.095540   48270 main.go:141] libmachine: (no-preload-014826) Calling .Stop
	I1024 20:04:57.099051   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 0/60
	I1024 20:04:58.100569   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 1/60
	I1024 20:04:59.102004   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 2/60
	I1024 20:05:00.104063   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 3/60
	I1024 20:05:01.105734   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 4/60
	I1024 20:05:02.107821   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 5/60
	I1024 20:05:03.109272   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 6/60
	I1024 20:05:04.110517   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 7/60
	I1024 20:05:05.112016   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 8/60
	I1024 20:05:06.114045   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 9/60
	I1024 20:05:07.116430   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 10/60
	I1024 20:05:08.117894   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 11/60
	I1024 20:05:09.119233   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 12/60
	I1024 20:05:10.120634   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 13/60
	I1024 20:05:11.122051   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 14/60
	I1024 20:05:12.123789   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 15/60
	I1024 20:05:13.125020   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 16/60
	I1024 20:05:14.126686   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 17/60
	I1024 20:05:15.128005   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 18/60
	I1024 20:05:16.129607   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 19/60
	I1024 20:05:17.131320   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 20/60
	I1024 20:05:18.133432   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 21/60
	I1024 20:05:19.134747   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 22/60
	I1024 20:05:20.136260   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 23/60
	I1024 20:05:21.138500   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 24/60
	I1024 20:05:22.140066   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 25/60
	I1024 20:05:23.141465   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 26/60
	I1024 20:05:24.142859   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 27/60
	I1024 20:05:25.144299   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 28/60
	I1024 20:05:26.146057   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 29/60
	I1024 20:05:27.148318   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 30/60
	I1024 20:05:28.149936   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 31/60
	I1024 20:05:29.151612   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 32/60
	I1024 20:05:30.153074   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 33/60
	I1024 20:05:31.154778   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 34/60
	I1024 20:05:32.156468   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 35/60
	I1024 20:05:33.157912   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 36/60
	I1024 20:05:34.159895   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 37/60
	I1024 20:05:35.161376   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 38/60
	I1024 20:05:36.162837   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 39/60
	I1024 20:05:37.165137   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 40/60
	I1024 20:05:38.166308   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 41/60
	I1024 20:05:39.167956   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 42/60
	I1024 20:05:40.169638   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 43/60
	I1024 20:05:41.170955   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 44/60
	I1024 20:05:42.172848   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 45/60
	I1024 20:05:43.174145   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 46/60
	I1024 20:05:44.176367   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 47/60
	I1024 20:05:45.177902   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 48/60
	I1024 20:05:46.179902   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 49/60
	I1024 20:05:47.181609   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 50/60
	I1024 20:05:48.183072   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 51/60
	I1024 20:05:49.184540   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 52/60
	I1024 20:05:50.185996   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 53/60
	I1024 20:05:51.187344   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 54/60
	I1024 20:05:52.189020   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 55/60
	I1024 20:05:53.190238   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 56/60
	I1024 20:05:54.191728   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 57/60
	I1024 20:05:55.192967   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 58/60
	I1024 20:05:56.194385   48270 main.go:141] libmachine: (no-preload-014826) Waiting for machine to stop 59/60
	I1024 20:05:57.194969   48270 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1024 20:05:57.195015   48270 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1024 20:05:57.197161   48270 out.go:177] 
	W1024 20:05:57.198877   48270 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1024 20:05:57.198897   48270 out.go:239] * 
	* 
	W1024 20:05:57.202094   48270 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 20:05:57.203529   48270 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-014826 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014826 -n no-preload-014826
E1024 20:06:00.584715   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014826 -n no-preload-014826: exit status 3 (18.628447432s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:06:15.833555   48836 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.162:22: connect: no route to host
	E1024 20:06:15.833574   48836 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.162:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-014826" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.12s)
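The failure above, and the identical stop failures that follow, all show the same sequence: libmachine asks the kvm2 driver to stop the guest, polls its state once a second for 60 iterations ("Waiting for machine to stop N/60"), retries the whole stop once after a short back-off, and finally gives up with GUEST_STOP_TIMEOUT and exit status 82 because the domain never leaves the "Running" state. The Go sketch below reproduces only that poll-and-retry shape; the stopper interface, stuckVM type, and function names are illustrative assumptions made for this report, not minikube's actual libmachine/driver API.

package main

import (
	"fmt"
	"time"
)

// stopper is an illustrative stand-in for a VM driver (an assumption for this
// sketch, not minikube's real API): it can request a stop and report state.
type stopper interface {
	Stop() error
	State() string
}

// stopWithTimeout requests a stop and then polls the state once per interval,
// up to polls times, mirroring the "Waiting for machine to stop N/60" lines.
func stopWithTimeout(m stopper, polls int, interval time.Duration) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < polls; i++ {
		if m.State() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, polls)
		time.Sleep(interval)
	}
	return fmt.Errorf("unable to stop vm, current state %q", m.State())
}

// stopWithRetry runs the whole poll loop twice with a short back-off between
// attempts, matching the single "will retry after ..." line in the log; when
// the second attempt also times out, the caller reports GUEST_STOP_TIMEOUT
// and exits with status 82.
func stopWithRetry(m stopper, polls int, interval time.Duration) error {
	var err error
	for attempt := 0; attempt < 2; attempt++ {
		if err = stopWithTimeout(m, polls, interval); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("GUEST_STOP_TIMEOUT: %w", err)
}

// stuckVM simulates a guest that never leaves the "Running" state, which is
// what every failing Stop test in this report observed.
type stuckVM struct{}

func (stuckVM) Stop() error   { return nil }
func (stuckVM) State() string { return "Running" }

func main() {
	// The real run polls 60 times at 1s; shorter values keep this sketch quick.
	if err := stopWithRetry(stuckVM{}, 3, 10*time.Millisecond); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}

The post-mortem status calls that follow each of these failures then return exit status 3 ("dial tcp ...:22: connect: no route to host") because SSH to the guest is unreachable, which is why the harness skips log retrieval.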

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (140.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-867165 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-867165 --alsologtostderr -v=3: exit status 82 (2m1.775941854s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-867165"  ...
	* Stopping node "embed-certs-867165"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 20:04:06.182910   48385 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:04:06.183016   48385 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:04:06.183031   48385 out.go:309] Setting ErrFile to fd 2...
	I1024 20:04:06.183036   48385 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:04:06.183197   48385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:04:06.183402   48385 out.go:303] Setting JSON to false
	I1024 20:04:06.183474   48385 mustload.go:65] Loading cluster: embed-certs-867165
	I1024 20:04:06.183824   48385 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:04:06.183895   48385 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/config.json ...
	I1024 20:04:06.184056   48385 mustload.go:65] Loading cluster: embed-certs-867165
	I1024 20:04:06.184156   48385 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:04:06.184178   48385 stop.go:39] StopHost: embed-certs-867165
	I1024 20:04:06.184568   48385 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:04:06.184611   48385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:04:06.199312   48385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33105
	I1024 20:04:06.199744   48385 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:04:06.200448   48385 main.go:141] libmachine: Using API Version  1
	I1024 20:04:06.200472   48385 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:04:06.200837   48385 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:04:06.203236   48385 out.go:177] * Stopping node "embed-certs-867165"  ...
	I1024 20:04:06.205150   48385 main.go:141] libmachine: Stopping "embed-certs-867165"...
	I1024 20:04:06.205181   48385 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:04:06.206850   48385 main.go:141] libmachine: (embed-certs-867165) Calling .Stop
	I1024 20:04:06.217753   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 0/60
	I1024 20:04:07.220151   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 1/60
	I1024 20:04:08.222238   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 2/60
	I1024 20:04:09.224338   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 3/60
	I1024 20:04:10.225834   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 4/60
	I1024 20:04:11.227823   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 5/60
	I1024 20:04:12.229418   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 6/60
	I1024 20:04:13.230777   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 7/60
	I1024 20:04:14.232280   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 8/60
	I1024 20:04:15.233602   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 9/60
	I1024 20:04:16.235692   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 10/60
	I1024 20:04:17.237232   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 11/60
	I1024 20:04:18.238586   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 12/60
	I1024 20:04:19.240227   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 13/60
	I1024 20:04:20.241763   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 14/60
	I1024 20:04:21.244034   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 15/60
	I1024 20:04:22.245550   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 16/60
	I1024 20:04:23.248072   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 17/60
	I1024 20:04:24.249354   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 18/60
	I1024 20:04:25.250527   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 19/60
	I1024 20:04:26.252601   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 20/60
	I1024 20:04:27.255273   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 21/60
	I1024 20:04:28.257694   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 22/60
	I1024 20:04:29.259747   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 23/60
	I1024 20:04:30.261434   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 24/60
	I1024 20:04:31.262986   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 25/60
	I1024 20:04:32.264377   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 26/60
	I1024 20:04:33.265681   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 27/60
	I1024 20:04:34.267696   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 28/60
	I1024 20:04:35.269414   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 29/60
	I1024 20:04:36.271655   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 30/60
	I1024 20:04:37.272987   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 31/60
	I1024 20:04:38.274305   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 32/60
	I1024 20:04:39.275713   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 33/60
	I1024 20:04:40.277019   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 34/60
	I1024 20:04:41.278738   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 35/60
	I1024 20:04:42.280072   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 36/60
	I1024 20:04:43.281354   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 37/60
	I1024 20:04:44.282678   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 38/60
	I1024 20:04:45.284552   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 39/60
	I1024 20:04:46.286458   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 40/60
	I1024 20:04:47.287778   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 41/60
	I1024 20:04:48.289190   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 42/60
	I1024 20:04:49.290437   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 43/60
	I1024 20:04:50.291859   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 44/60
	I1024 20:04:51.293802   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 45/60
	I1024 20:04:52.295161   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 46/60
	I1024 20:04:53.296880   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 47/60
	I1024 20:04:54.298407   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 48/60
	I1024 20:04:55.299857   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 49/60
	I1024 20:04:56.301888   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 50/60
	I1024 20:04:57.303497   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 51/60
	I1024 20:04:58.305516   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 52/60
	I1024 20:04:59.306897   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 53/60
	I1024 20:05:00.308760   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 54/60
	I1024 20:05:01.310894   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 55/60
	I1024 20:05:02.312479   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 56/60
	I1024 20:05:03.313940   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 57/60
	I1024 20:05:04.316561   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 58/60
	I1024 20:05:05.317925   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 59/60
	I1024 20:05:06.319009   48385 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1024 20:05:06.319055   48385 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1024 20:05:06.319089   48385 retry.go:31] will retry after 1.450369733s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1024 20:05:07.769602   48385 stop.go:39] StopHost: embed-certs-867165
	I1024 20:05:07.769938   48385 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:05:07.769997   48385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:05:07.785413   48385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I1024 20:05:07.785819   48385 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:05:07.786305   48385 main.go:141] libmachine: Using API Version  1
	I1024 20:05:07.786329   48385 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:05:07.786695   48385 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:05:07.789864   48385 out.go:177] * Stopping node "embed-certs-867165"  ...
	I1024 20:05:07.791248   48385 main.go:141] libmachine: Stopping "embed-certs-867165"...
	I1024 20:05:07.791262   48385 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:05:07.792757   48385 main.go:141] libmachine: (embed-certs-867165) Calling .Stop
	I1024 20:05:07.796134   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 0/60
	I1024 20:05:08.797583   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 1/60
	I1024 20:05:09.800082   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 2/60
	I1024 20:05:10.801555   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 3/60
	I1024 20:05:11.803794   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 4/60
	I1024 20:05:12.805786   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 5/60
	I1024 20:05:13.807030   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 6/60
	I1024 20:05:14.808426   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 7/60
	I1024 20:05:15.809703   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 8/60
	I1024 20:05:16.811690   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 9/60
	I1024 20:05:17.813714   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 10/60
	I1024 20:05:18.815786   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 11/60
	I1024 20:05:19.817691   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 12/60
	I1024 20:05:20.818937   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 13/60
	I1024 20:05:21.821096   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 14/60
	I1024 20:05:22.823685   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 15/60
	I1024 20:05:23.825078   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 16/60
	I1024 20:05:24.826640   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 17/60
	I1024 20:05:25.828083   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 18/60
	I1024 20:05:26.829475   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 19/60
	I1024 20:05:27.831011   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 20/60
	I1024 20:05:28.832204   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 21/60
	I1024 20:05:29.833800   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 22/60
	I1024 20:05:30.835919   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 23/60
	I1024 20:05:31.837477   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 24/60
	I1024 20:05:32.839316   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 25/60
	I1024 20:05:33.840561   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 26/60
	I1024 20:05:34.841820   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 27/60
	I1024 20:05:35.843095   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 28/60
	I1024 20:05:36.844428   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 29/60
	I1024 20:05:37.846350   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 30/60
	I1024 20:05:38.847376   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 31/60
	I1024 20:05:39.848629   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 32/60
	I1024 20:05:40.849962   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 33/60
	I1024 20:05:41.851108   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 34/60
	I1024 20:05:42.853385   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 35/60
	I1024 20:05:43.854796   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 36/60
	I1024 20:05:44.856098   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 37/60
	I1024 20:05:45.857614   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 38/60
	I1024 20:05:46.858771   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 39/60
	I1024 20:05:47.860857   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 40/60
	I1024 20:05:48.861904   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 41/60
	I1024 20:05:49.863679   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 42/60
	I1024 20:05:50.864869   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 43/60
	I1024 20:05:51.866263   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 44/60
	I1024 20:05:52.867562   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 45/60
	I1024 20:05:53.869022   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 46/60
	I1024 20:05:54.870458   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 47/60
	I1024 20:05:55.871737   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 48/60
	I1024 20:05:56.872987   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 49/60
	I1024 20:05:57.874536   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 50/60
	I1024 20:05:58.875816   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 51/60
	I1024 20:05:59.877245   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 52/60
	I1024 20:06:00.878782   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 53/60
	I1024 20:06:01.880041   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 54/60
	I1024 20:06:02.881503   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 55/60
	I1024 20:06:03.882898   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 56/60
	I1024 20:06:04.884207   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 57/60
	I1024 20:06:05.885513   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 58/60
	I1024 20:06:06.886773   48385 main.go:141] libmachine: (embed-certs-867165) Waiting for machine to stop 59/60
	I1024 20:06:07.887508   48385 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1024 20:06:07.887547   48385 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1024 20:06:07.889560   48385 out.go:177] 
	W1024 20:06:07.890982   48385 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1024 20:06:07.891000   48385 out.go:239] * 
	* 
	W1024 20:06:07.893872   48385 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 20:06:07.895428   48385 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-867165 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-867165 -n embed-certs-867165
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-867165 -n embed-certs-867165: exit status 3 (18.432503679s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:06:26.329604   48890 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.10:22: connect: no route to host
	E1024 20:06:26.329622   48890 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.10:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-867165" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (140.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-643126 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-643126 --alsologtostderr -v=3: exit status 82 (2m1.761882537s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-643126"  ...
	* Stopping node "default-k8s-diff-port-643126"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 20:05:30.765843   48751 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:05:30.766070   48751 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:05:30.766078   48751 out.go:309] Setting ErrFile to fd 2...
	I1024 20:05:30.766083   48751 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:05:30.766252   48751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:05:30.766480   48751 out.go:303] Setting JSON to false
	I1024 20:05:30.766555   48751 mustload.go:65] Loading cluster: default-k8s-diff-port-643126
	I1024 20:05:30.766892   48751 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:05:30.766952   48751 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/config.json ...
	I1024 20:05:30.767127   48751 mustload.go:65] Loading cluster: default-k8s-diff-port-643126
	I1024 20:05:30.767229   48751 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:05:30.767252   48751 stop.go:39] StopHost: default-k8s-diff-port-643126
	I1024 20:05:30.767598   48751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:05:30.767656   48751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:05:30.782348   48751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I1024 20:05:30.782859   48751 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:05:30.783497   48751 main.go:141] libmachine: Using API Version  1
	I1024 20:05:30.783524   48751 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:05:30.783917   48751 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:05:30.786129   48751 out.go:177] * Stopping node "default-k8s-diff-port-643126"  ...
	I1024 20:05:30.787834   48751 main.go:141] libmachine: Stopping "default-k8s-diff-port-643126"...
	I1024 20:05:30.787861   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:05:30.789721   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Stop
	I1024 20:05:30.793199   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 0/60
	I1024 20:05:31.795077   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 1/60
	I1024 20:05:32.796386   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 2/60
	I1024 20:05:33.797862   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 3/60
	I1024 20:05:34.799314   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 4/60
	I1024 20:05:35.801491   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 5/60
	I1024 20:05:36.803211   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 6/60
	I1024 20:05:37.804731   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 7/60
	I1024 20:05:38.806008   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 8/60
	I1024 20:05:39.807544   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 9/60
	I1024 20:05:40.809615   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 10/60
	I1024 20:05:41.811020   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 11/60
	I1024 20:05:42.812384   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 12/60
	I1024 20:05:43.813827   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 13/60
	I1024 20:05:44.815892   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 14/60
	I1024 20:05:45.817161   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 15/60
	I1024 20:05:46.818545   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 16/60
	I1024 20:05:47.820035   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 17/60
	I1024 20:05:48.821406   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 18/60
	I1024 20:05:49.822779   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 19/60
	I1024 20:05:50.824931   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 20/60
	I1024 20:05:51.826964   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 21/60
	I1024 20:05:52.828382   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 22/60
	I1024 20:05:53.829785   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 23/60
	I1024 20:05:54.831057   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 24/60
	I1024 20:05:55.832918   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 25/60
	I1024 20:05:56.834260   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 26/60
	I1024 20:05:57.835485   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 27/60
	I1024 20:05:58.836861   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 28/60
	I1024 20:05:59.838438   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 29/60
	I1024 20:06:00.840463   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 30/60
	I1024 20:06:01.841744   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 31/60
	I1024 20:06:02.842919   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 32/60
	I1024 20:06:03.844407   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 33/60
	I1024 20:06:04.845719   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 34/60
	I1024 20:06:05.847716   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 35/60
	I1024 20:06:06.849216   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 36/60
	I1024 20:06:07.850436   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 37/60
	I1024 20:06:08.852006   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 38/60
	I1024 20:06:09.853428   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 39/60
	I1024 20:06:10.855382   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 40/60
	I1024 20:06:11.856755   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 41/60
	I1024 20:06:12.858344   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 42/60
	I1024 20:06:13.859975   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 43/60
	I1024 20:06:14.861360   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 44/60
	I1024 20:06:15.863071   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 45/60
	I1024 20:06:16.864489   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 46/60
	I1024 20:06:17.866150   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 47/60
	I1024 20:06:18.867611   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 48/60
	I1024 20:06:19.869603   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 49/60
	I1024 20:06:20.871544   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 50/60
	I1024 20:06:21.873030   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 51/60
	I1024 20:06:22.874383   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 52/60
	I1024 20:06:23.875749   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 53/60
	I1024 20:06:24.877135   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 54/60
	I1024 20:06:25.878782   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 55/60
	I1024 20:06:26.880243   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 56/60
	I1024 20:06:27.881689   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 57/60
	I1024 20:06:28.883125   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 58/60
	I1024 20:06:29.884648   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 59/60
	I1024 20:06:30.885853   48751 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1024 20:06:30.885901   48751 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1024 20:06:30.885934   48751 retry.go:31] will retry after 1.460781976s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1024 20:06:32.347521   48751 stop.go:39] StopHost: default-k8s-diff-port-643126
	I1024 20:06:32.347956   48751 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:06:32.348011   48751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:06:32.362152   48751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I1024 20:06:32.362584   48751 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:06:32.363080   48751 main.go:141] libmachine: Using API Version  1
	I1024 20:06:32.363103   48751 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:06:32.363413   48751 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:06:32.365546   48751 out.go:177] * Stopping node "default-k8s-diff-port-643126"  ...
	I1024 20:06:32.367047   48751 main.go:141] libmachine: Stopping "default-k8s-diff-port-643126"...
	I1024 20:06:32.367064   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:06:32.368673   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Stop
	I1024 20:06:32.371916   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 0/60
	I1024 20:06:33.373269   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 1/60
	I1024 20:06:34.374585   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 2/60
	I1024 20:06:35.375757   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 3/60
	I1024 20:06:36.377115   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 4/60
	I1024 20:06:37.379109   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 5/60
	I1024 20:06:38.380694   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 6/60
	I1024 20:06:39.382044   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 7/60
	I1024 20:06:40.383552   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 8/60
	I1024 20:06:41.384867   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 9/60
	I1024 20:06:42.386760   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 10/60
	I1024 20:06:43.388168   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 11/60
	I1024 20:06:44.389577   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 12/60
	I1024 20:06:45.390788   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 13/60
	I1024 20:06:46.392011   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 14/60
	I1024 20:06:47.393711   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 15/60
	I1024 20:06:48.395536   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 16/60
	I1024 20:06:49.396921   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 17/60
	I1024 20:06:50.398532   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 18/60
	I1024 20:06:51.400771   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 19/60
	I1024 20:06:52.402974   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 20/60
	I1024 20:06:53.404079   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 21/60
	I1024 20:06:54.405428   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 22/60
	I1024 20:06:55.406773   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 23/60
	I1024 20:06:56.408338   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 24/60
	I1024 20:06:57.410221   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 25/60
	I1024 20:06:58.411670   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 26/60
	I1024 20:06:59.413177   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 27/60
	I1024 20:07:00.414641   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 28/60
	I1024 20:07:01.415822   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 29/60
	I1024 20:07:02.417601   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 30/60
	I1024 20:07:03.418902   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 31/60
	I1024 20:07:04.420211   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 32/60
	I1024 20:07:05.421599   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 33/60
	I1024 20:07:06.422822   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 34/60
	I1024 20:07:07.424615   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 35/60
	I1024 20:07:08.425890   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 36/60
	I1024 20:07:09.427489   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 37/60
	I1024 20:07:10.428891   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 38/60
	I1024 20:07:11.430629   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 39/60
	I1024 20:07:12.432320   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 40/60
	I1024 20:07:13.433645   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 41/60
	I1024 20:07:14.434862   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 42/60
	I1024 20:07:15.436256   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 43/60
	I1024 20:07:16.437604   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 44/60
	I1024 20:07:17.439423   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 45/60
	I1024 20:07:18.440898   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 46/60
	I1024 20:07:19.442214   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 47/60
	I1024 20:07:20.443690   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 48/60
	I1024 20:07:21.445030   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 49/60
	I1024 20:07:22.446948   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 50/60
	I1024 20:07:23.448277   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 51/60
	I1024 20:07:24.449721   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 52/60
	I1024 20:07:25.451091   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 53/60
	I1024 20:07:26.452600   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 54/60
	I1024 20:07:27.454296   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 55/60
	I1024 20:07:28.455715   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 56/60
	I1024 20:07:29.457232   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 57/60
	I1024 20:07:30.458634   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 58/60
	I1024 20:07:31.460002   48751 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for machine to stop 59/60
	I1024 20:07:32.460728   48751 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1024 20:07:32.460767   48751 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1024 20:07:32.462915   48751 out.go:177] 
	W1024 20:07:32.464465   48751 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1024 20:07:32.464483   48751 out.go:239] * 
	* 
	W1024 20:07:32.467461   48751 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 20:07:32.469051   48751 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-643126 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126: exit status 3 (18.594861701s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:07:51.065585   49524 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host
	E1024 20:07:51.065606   49524 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-643126" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (140.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014826 -n no-preload-014826
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014826 -n no-preload-014826: exit status 3 (3.167945198s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:06:19.001644   48931 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.162:22: connect: no route to host
	E1024 20:06:19.001671   48931 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.162:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-014826 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-014826 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154314727s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.162:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-014826 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014826 -n no-preload-014826
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014826 -n no-preload-014826: exit status 3 (3.061667149s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:06:28.217674   49000 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.162:22: connect: no route to host
	E1024 20:06:28.217698   49000 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.162:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-014826" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
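The failure pattern above repeats across the EnableAddonAfterStop tests: after the stop times out, `status --format={{.Host}}` prints "Error" (exit 3) instead of the expected "Stopped", and the subsequent `addons enable dashboard` fails with exit 11 because minikube cannot SSH into the VM. As a rough illustration only, the Go sketch below mirrors the post-stop check the test performs by shelling out to the minikube binary; the binary path, flags, and profile name are copied from the log above, and `hostStatus` is a hypothetical helper, not part of the test suite.

	// Minimal sketch, not the actual start_stop_delete_test.go code: run the same
	// status command the test runs and compare the printed host state to "Stopped".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostStatus is a hypothetical helper; binary path and flags are taken from the log.
	func hostStatus(profile string) (string, error) {
		out, err := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		got, err := hostStatus("no-preload-014826") // profile name from the failing test above
		if err != nil {
			fmt.Println("status error (may be ok):", err) // exit status 3 surfaces here as *exec.ExitError
		}
		if got != "Stopped" {
			fmt.Printf("expected post-stop host status %q but got %q\n", "Stopped", got)
		}
	}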

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-867165 -n embed-certs-867165
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-867165 -n embed-certs-867165: exit status 3 (3.168273403s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:06:29.497640   49030 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.10:22: connect: no route to host
	E1024 20:06:29.497665   49030 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.10:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-867165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-867165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15298555s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.10:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-867165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-867165 -n embed-certs-867165
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-867165 -n embed-certs-867165: exit status 3 (3.062617026s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:06:38.713678   49157 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.10:22: connect: no route to host
	E1024 20:06:38.713703   49157 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.10:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-867165" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (139.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-467375 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-467375 --alsologtostderr -v=3: exit status 82 (2m0.931738751s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-467375"  ...
	* Stopping node "old-k8s-version-467375"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 20:07:00.851717   49378 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:07:00.851833   49378 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:07:00.851844   49378 out.go:309] Setting ErrFile to fd 2...
	I1024 20:07:00.851851   49378 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:07:00.852021   49378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:07:00.852280   49378 out.go:303] Setting JSON to false
	I1024 20:07:00.852395   49378 mustload.go:65] Loading cluster: old-k8s-version-467375
	I1024 20:07:00.852731   49378 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:07:00.852810   49378 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:07:00.852995   49378 mustload.go:65] Loading cluster: old-k8s-version-467375
	I1024 20:07:00.853125   49378 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:07:00.853168   49378 stop.go:39] StopHost: old-k8s-version-467375
	I1024 20:07:00.853598   49378 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:07:00.853667   49378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:07:00.868134   49378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36547
	I1024 20:07:00.868582   49378 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:07:00.869163   49378 main.go:141] libmachine: Using API Version  1
	I1024 20:07:00.869189   49378 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:07:00.869590   49378 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:07:00.872206   49378 out.go:177] * Stopping node "old-k8s-version-467375"  ...
	I1024 20:07:00.873648   49378 main.go:141] libmachine: Stopping "old-k8s-version-467375"...
	I1024 20:07:00.873664   49378 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:07:00.875284   49378 main.go:141] libmachine: (old-k8s-version-467375) Calling .Stop
	I1024 20:07:00.878661   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 0/60
	I1024 20:07:01.880969   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 1/60
	I1024 20:07:02.882298   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 2/60
	I1024 20:07:03.883617   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 3/60
	I1024 20:07:04.884964   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 4/60
	I1024 20:07:05.887347   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 5/60
	I1024 20:07:06.888755   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 6/60
	I1024 20:07:07.890260   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 7/60
	I1024 20:07:08.891868   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 8/60
	I1024 20:07:09.893393   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 9/60
	I1024 20:07:10.894715   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 10/60
	I1024 20:07:11.896222   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 11/60
	I1024 20:07:12.897616   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 12/60
	I1024 20:07:13.899144   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 13/60
	I1024 20:07:14.900599   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 14/60
	I1024 20:07:15.902452   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 15/60
	I1024 20:07:16.903927   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 16/60
	I1024 20:07:17.905983   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 17/60
	I1024 20:07:18.907331   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 18/60
	I1024 20:07:19.908735   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 19/60
	I1024 20:07:20.911342   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 20/60
	I1024 20:07:21.912699   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 21/60
	I1024 20:07:22.914036   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 22/60
	I1024 20:07:23.915397   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 23/60
	I1024 20:07:24.916716   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 24/60
	I1024 20:07:25.918748   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 25/60
	I1024 20:07:26.920154   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 26/60
	I1024 20:07:27.921619   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 27/60
	I1024 20:07:28.923100   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 28/60
	I1024 20:07:29.924594   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 29/60
	I1024 20:07:30.926661   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 30/60
	I1024 20:07:31.928020   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 31/60
	I1024 20:07:32.929527   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 32/60
	I1024 20:07:33.930885   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 33/60
	I1024 20:07:34.932685   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 34/60
	I1024 20:07:35.934756   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 35/60
	I1024 20:07:36.936233   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 36/60
	I1024 20:07:37.937824   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 37/60
	I1024 20:07:38.939311   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 38/60
	I1024 20:07:39.940775   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 39/60
	I1024 20:07:40.942744   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 40/60
	I1024 20:07:41.943964   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 41/60
	I1024 20:07:42.945574   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 42/60
	I1024 20:07:43.946983   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 43/60
	I1024 20:07:44.948471   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 44/60
	I1024 20:07:45.950499   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 45/60
	I1024 20:07:46.951798   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 46/60
	I1024 20:07:47.953307   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 47/60
	I1024 20:07:48.954755   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 48/60
	I1024 20:07:49.956240   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 49/60
	I1024 20:07:50.958403   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 50/60
	I1024 20:07:51.959809   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 51/60
	I1024 20:07:52.961134   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 52/60
	I1024 20:07:53.962578   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 53/60
	I1024 20:07:54.963806   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 54/60
	I1024 20:07:55.965821   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 55/60
	I1024 20:07:56.967178   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 56/60
	I1024 20:07:57.968748   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 57/60
	I1024 20:07:58.970368   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 58/60
	I1024 20:07:59.971861   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 59/60
	I1024 20:08:00.973086   49378 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1024 20:08:00.973147   49378 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1024 20:08:00.973189   49378 retry.go:31] will retry after 627.825611ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1024 20:08:01.602011   49378 stop.go:39] StopHost: old-k8s-version-467375
	I1024 20:08:01.602353   49378 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:08:01.602396   49378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:08:01.616681   49378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I1024 20:08:01.617111   49378 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:08:01.617608   49378 main.go:141] libmachine: Using API Version  1
	I1024 20:08:01.617632   49378 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:08:01.618052   49378 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:08:01.620529   49378 out.go:177] * Stopping node "old-k8s-version-467375"  ...
	I1024 20:08:01.622109   49378 main.go:141] libmachine: Stopping "old-k8s-version-467375"...
	I1024 20:08:01.622124   49378 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:08:01.623585   49378 main.go:141] libmachine: (old-k8s-version-467375) Calling .Stop
	I1024 20:08:01.626729   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 0/60
	I1024 20:08:02.628245   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 1/60
	I1024 20:08:03.629566   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 2/60
	I1024 20:08:04.631185   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 3/60
	I1024 20:08:05.632615   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 4/60
	I1024 20:08:06.633829   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 5/60
	I1024 20:08:07.635323   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 6/60
	I1024 20:08:08.636633   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 7/60
	I1024 20:08:09.638272   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 8/60
	I1024 20:08:10.639675   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 9/60
	I1024 20:08:11.641507   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 10/60
	I1024 20:08:12.642844   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 11/60
	I1024 20:08:13.644261   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 12/60
	I1024 20:08:14.645610   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 13/60
	I1024 20:08:15.647090   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 14/60
	I1024 20:08:16.649339   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 15/60
	I1024 20:08:17.650862   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 16/60
	I1024 20:08:18.652235   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 17/60
	I1024 20:08:19.653739   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 18/60
	I1024 20:08:20.655133   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 19/60
	I1024 20:08:21.657033   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 20/60
	I1024 20:08:22.658384   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 21/60
	I1024 20:08:23.659867   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 22/60
	I1024 20:08:24.661276   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 23/60
	I1024 20:08:25.663444   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 24/60
	I1024 20:08:26.664816   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 25/60
	I1024 20:08:27.666381   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 26/60
	I1024 20:08:28.667794   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 27/60
	I1024 20:08:29.669491   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 28/60
	I1024 20:08:30.670884   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 29/60
	I1024 20:08:31.672619   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 30/60
	I1024 20:08:32.673814   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 31/60
	I1024 20:08:33.676016   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 32/60
	I1024 20:08:34.677497   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 33/60
	I1024 20:08:35.678942   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 34/60
	I1024 20:08:36.680563   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 35/60
	I1024 20:08:37.682153   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 36/60
	I1024 20:08:38.683521   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 37/60
	I1024 20:08:39.685152   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 38/60
	I1024 20:08:40.686503   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 39/60
	I1024 20:08:41.688111   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 40/60
	I1024 20:08:42.689557   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 41/60
	I1024 20:08:43.691916   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 42/60
	I1024 20:08:44.693374   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 43/60
	I1024 20:08:45.695570   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 44/60
	I1024 20:08:46.696965   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 45/60
	I1024 20:08:47.698388   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 46/60
	I1024 20:08:48.699944   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 47/60
	I1024 20:08:49.701368   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 48/60
	I1024 20:08:50.702741   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 49/60
	I1024 20:08:51.704376   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 50/60
	I1024 20:08:52.705733   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 51/60
	I1024 20:08:53.707052   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 52/60
	I1024 20:08:54.708447   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 53/60
	I1024 20:08:55.709787   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 54/60
	I1024 20:08:56.711250   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 55/60
	I1024 20:08:57.712900   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 56/60
	I1024 20:08:58.714516   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 57/60
	I1024 20:08:59.716076   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 58/60
	I1024 20:09:00.717513   49378 main.go:141] libmachine: (old-k8s-version-467375) Waiting for machine to stop 59/60
	I1024 20:09:01.718856   49378 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1024 20:09:01.718895   49378 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1024 20:09:01.721123   49378 out.go:177] 
	W1024 20:09:01.722666   49378 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1024 20:09:01.722681   49378 out.go:239] * 
	* 
	W1024 20:09:01.725608   49378 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 20:09:01.726969   49378 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-467375 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467375 -n old-k8s-version-467375
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467375 -n old-k8s-version-467375: exit status 3 (18.680992668s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:09:20.409613   49891 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.71:22: connect: no route to host
	E1024 20:09:20.409640   49891 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.71:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-467375" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.61s)
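The verbose log for each of the Stop failures in this report has the same shape: minikube asks the kvm2 driver to stop the node, polls the machine state once per second for 60 attempts, retries once, and finally exits with status 82 (GUEST_STOP_TIMEOUT) because the VM never leaves "Running". The sketch below is a simplified stand-in for that wait loop, not minikube's actual implementation; `getState` is a placeholder for the driver's GetState call seen in the log.

	// Simplified sketch of the stop/wait pattern visible in the log above; assumes a
	// getState placeholder instead of the real libmachine driver call.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getState stands in for the kvm2 driver's GetState call; here it never changes,
	// which reproduces the "Waiting for machine to stop N/60" lines and the timeout.
	func getState() string { return "Running" }

	func waitForStop(attempts int) error {
		for i := 0; i < attempts; i++ {
			if getState() != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(60); err != nil {
			// In minikube this surfaces as exit status 82 / GUEST_STOP_TIMEOUT.
			fmt.Println("stop err:", err)
		}
	}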

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126
E1024 20:07:53.605050   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126: exit status 3 (3.167900039s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:07:54.233626   49588 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host
	E1024 20:07:54.233645   49588 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-643126 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-643126 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15328822s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-643126 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126: exit status 3 (3.062203004s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:08:03.449627   49659 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host
	E1024 20:08:03.449656   49659 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.148:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-643126" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467375 -n old-k8s-version-467375
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467375 -n old-k8s-version-467375: exit status 3 (3.167869661s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:09:23.577602   49965 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.71:22: connect: no route to host
	E1024 20:09:23.577624   49965 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.71:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-467375 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-467375 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153650352s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.71:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-467375 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467375 -n old-k8s-version-467375
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467375 -n old-k8s-version-467375: exit status 3 (3.062159794s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:09:32.793656   50036 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.71:22: connect: no route to host
	E1024 20:09:32.793687   50036 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.71:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-467375" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)
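Every one of these post-stop failures ultimately reduces to the same SSH symptom: `dial tcp <VM-IP>:22: connect: no route to host`. A quick, hypothetical way to confirm that symptom outside the test harness is to probe the VM's SSH port directly, as in the sketch below; the address is copied from the old-k8s-version log lines above and the check is not part of minikube or the test suite.

	// Hypothetical diagnostic only: probe the VM's SSH port with a short timeout to
	// reproduce the "no route to host" error reported by the status and addon commands.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.71:22" // IP taken from the old-k8s-version-467375 errors above
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Println("ssh port unreachable:", err) // expect: dial tcp ...: connect: no route to host
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable")
	}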

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-867165 -n embed-certs-867165
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-24 20:25:17.540990852 +0000 UTC m=+5079.306469847
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-867165 -n embed-certs-867165
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-867165 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-867165 logs -n 25: (1.650305552s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-636215                                        | pause-636215                 | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:01 UTC |
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-145190                              | stopped-upgrade-145190       | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:01 UTC |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:02 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-087071 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | disable-driver-mounts-087071                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:05 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-014826             | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-867165            | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC | 24 Oct 23 20:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-643126  | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC | 24 Oct 23 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC |                     |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-014826                  | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-867165                 | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467375        | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-643126       | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:08 UTC | 24 Oct 23 20:16 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467375             | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC | 24 Oct 23 20:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 20:09:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 20:09:32.850310   50077 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:09:32.850450   50077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:09:32.850462   50077 out.go:309] Setting ErrFile to fd 2...
	I1024 20:09:32.850470   50077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:09:32.850632   50077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:09:32.851152   50077 out.go:303] Setting JSON to false
	I1024 20:09:32.851985   50077 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6471,"bootTime":1698171702,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 20:09:32.852046   50077 start.go:138] virtualization: kvm guest
	I1024 20:09:32.854420   50077 out.go:177] * [old-k8s-version-467375] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 20:09:32.855945   50077 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:09:32.855955   50077 notify.go:220] Checking for updates...
	I1024 20:09:32.857502   50077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:09:32.858984   50077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:09:32.860444   50077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:09:32.861833   50077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 20:09:32.863229   50077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:09:32.864917   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:09:32.865284   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:09:32.865345   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:09:32.879470   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I1024 20:09:32.879865   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:09:32.880332   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:09:32.880355   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:09:32.880731   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:09:32.880894   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:09:32.882647   50077 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 20:09:32.884050   50077 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:09:32.884316   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:09:32.884351   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:09:32.897671   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38215
	I1024 20:09:32.898054   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:09:32.898495   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:09:32.898521   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:09:32.898837   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:09:32.899002   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:09:32.933365   50077 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 20:09:32.934993   50077 start.go:298] selected driver: kvm2
	I1024 20:09:32.935008   50077 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:09:32.935100   50077 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:09:32.935713   50077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:09:32.935789   50077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 20:09:32.949274   50077 install.go:137] /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1024 20:09:32.949613   50077 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 20:09:32.949670   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:09:32.949682   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:09:32.949693   50077 start_flags.go:323] config:
	{Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:09:32.949823   50077 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:09:32.951734   50077 out.go:177] * Starting control plane node old-k8s-version-467375 in cluster old-k8s-version-467375
	I1024 20:09:31.289529   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:32.953102   50077 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 20:09:32.953131   50077 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1024 20:09:32.953140   50077 cache.go:57] Caching tarball of preloaded images
	I1024 20:09:32.953220   50077 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 20:09:32.953230   50077 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1024 20:09:32.953361   50077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:09:32.953531   50077 start.go:365] acquiring machines lock for old-k8s-version-467375: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:09:37.369555   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:40.441571   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:46.521544   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:49.593529   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:55.673497   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:58.745605   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:04.825563   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:07.897530   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:13.977541   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:17.049658   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:23.129561   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:26.201528   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:32.281583   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:35.353592   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:41.433570   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:44.505586   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:50.585514   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:53.657506   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:59.737620   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:11:02.809631   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:11:05.812536   49198 start.go:369] acquired machines lock for "embed-certs-867165" in 4m26.940203259s
	I1024 20:11:05.812584   49198 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:05.812594   49198 fix.go:54] fixHost starting: 
	I1024 20:11:05.812911   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:05.812959   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:05.827853   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I1024 20:11:05.828400   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:05.828896   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:05.828922   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:05.829237   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:05.829432   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:05.829588   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:05.831229   49198 fix.go:102] recreateIfNeeded on embed-certs-867165: state=Stopped err=<nil>
	I1024 20:11:05.831249   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	W1024 20:11:05.831407   49198 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:05.833007   49198 out.go:177] * Restarting existing kvm2 VM for "embed-certs-867165" ...
	I1024 20:11:05.810496   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:05.810546   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:11:05.812388   49071 machine.go:91] provisioned docker machine in 4m37.419019216s
	I1024 20:11:05.812422   49071 fix.go:56] fixHost completed within 4m37.4383256s
	I1024 20:11:05.812427   49071 start.go:83] releasing machines lock for "no-preload-014826", held for 4m37.438344867s
	W1024 20:11:05.812453   49071 start.go:691] error starting host: provision: host is not running
	W1024 20:11:05.812538   49071 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1024 20:11:05.812551   49071 start.go:706] Will try again in 5 seconds ...
	I1024 20:11:05.834235   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Start
	I1024 20:11:05.834397   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring networks are active...
	I1024 20:11:05.835212   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring network default is active
	I1024 20:11:05.835540   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring network mk-embed-certs-867165 is active
	I1024 20:11:05.835850   49198 main.go:141] libmachine: (embed-certs-867165) Getting domain xml...
	I1024 20:11:05.836556   49198 main.go:141] libmachine: (embed-certs-867165) Creating domain...
	I1024 20:11:07.054253   49198 main.go:141] libmachine: (embed-certs-867165) Waiting to get IP...
	I1024 20:11:07.055379   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.055819   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.055911   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.055829   50328 retry.go:31] will retry after 212.147571ms: waiting for machine to come up
	I1024 20:11:07.269505   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.269953   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.270002   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.269942   50328 retry.go:31] will retry after 308.705783ms: waiting for machine to come up
	I1024 20:11:07.580602   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.581075   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.581103   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.581041   50328 retry.go:31] will retry after 467.682838ms: waiting for machine to come up
	I1024 20:11:08.050725   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:08.051121   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:08.051154   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:08.051070   50328 retry.go:31] will retry after 399.648518ms: waiting for machine to come up
	I1024 20:11:08.452605   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:08.452968   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:08.452999   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:08.452906   50328 retry.go:31] will retry after 617.165915ms: waiting for machine to come up
	I1024 20:11:10.812763   49071 start.go:365] acquiring machines lock for no-preload-014826: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:11:09.071803   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:09.072236   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:09.072268   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:09.072205   50328 retry.go:31] will retry after 678.895198ms: waiting for machine to come up
	I1024 20:11:09.753179   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:09.753658   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:09.753689   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:09.753600   50328 retry.go:31] will retry after 807.254598ms: waiting for machine to come up
	I1024 20:11:10.562345   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:10.562733   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:10.562761   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:10.562688   50328 retry.go:31] will retry after 921.950476ms: waiting for machine to come up
	I1024 20:11:11.485981   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:11.486498   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:11.486524   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:11.486452   50328 retry.go:31] will retry after 1.56679652s: waiting for machine to come up
	I1024 20:11:13.055209   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:13.055638   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:13.055664   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:13.055594   50328 retry.go:31] will retry after 2.296157501s: waiting for machine to come up
	I1024 20:11:15.355156   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:15.355522   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:15.355555   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:15.355460   50328 retry.go:31] will retry after 1.913484523s: waiting for machine to come up
	I1024 20:11:17.270771   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:17.271200   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:17.271237   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:17.271154   50328 retry.go:31] will retry after 2.867410465s: waiting for machine to come up
	I1024 20:11:20.142209   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:20.142651   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:20.142675   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:20.142603   50328 retry.go:31] will retry after 4.193720328s: waiting for machine to come up
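
The repeated retry.go:31 lines above show the provisioner polling libvirt for a DHCP lease with growing wait intervals until the restarted VM reports an IP. A minimal Go sketch of that poll-with-backoff pattern (the lookupIP helper and the timings are hypothetical, not minikube's actual retry implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // errNoIP stands in for "unable to find current IP address of domain ...".
    var errNoIP = errors.New("machine has no IP yet")

    // lookupIP is a hypothetical stand-in for reading the libvirt DHCP leases.
    func lookupIP(attempt int) (string, error) {
    	if attempt < 5 { // pretend the lease shows up on the 5th poll
    		return "", errNoIP
    	}
    	return "192.168.72.10", nil
    }

    func main() {
    	wait := 200 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		ip, err := lookupIP(attempt)
    		if err == nil {
    			fmt.Println("Found IP for machine:", ip)
    			return
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		if wait < 5*time.Second {
    			wait = wait * 3 / 2 // grow the interval, roughly like the log's 212ms -> 308ms -> ...
    		}
    	}
    }
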
	I1024 20:11:25.925856   49708 start.go:369] acquired machines lock for "default-k8s-diff-port-643126" in 3m22.313323811s
	I1024 20:11:25.925904   49708 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:25.925911   49708 fix.go:54] fixHost starting: 
	I1024 20:11:25.926296   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:25.926331   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:25.942871   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I1024 20:11:25.943321   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:25.943866   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:11:25.943890   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:25.944187   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:25.944359   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:25.944510   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:11:25.945833   49708 fix.go:102] recreateIfNeeded on default-k8s-diff-port-643126: state=Stopped err=<nil>
	I1024 20:11:25.945875   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	W1024 20:11:25.946039   49708 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:25.949057   49708 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-643126" ...
	I1024 20:11:24.340353   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.340876   49198 main.go:141] libmachine: (embed-certs-867165) Found IP for machine: 192.168.72.10
	I1024 20:11:24.340899   49198 main.go:141] libmachine: (embed-certs-867165) Reserving static IP address...
	I1024 20:11:24.340912   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has current primary IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.341389   49198 main.go:141] libmachine: (embed-certs-867165) Reserved static IP address: 192.168.72.10
	I1024 20:11:24.341430   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "embed-certs-867165", mac: "52:54:00:59:66:c6", ip: "192.168.72.10"} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.341453   49198 main.go:141] libmachine: (embed-certs-867165) Waiting for SSH to be available...
	I1024 20:11:24.341482   49198 main.go:141] libmachine: (embed-certs-867165) DBG | skip adding static IP to network mk-embed-certs-867165 - found existing host DHCP lease matching {name: "embed-certs-867165", mac: "52:54:00:59:66:c6", ip: "192.168.72.10"}
	I1024 20:11:24.341500   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Getting to WaitForSSH function...
	I1024 20:11:24.343707   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.344021   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.344050   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.344202   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Using SSH client type: external
	I1024 20:11:24.344229   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa (-rw-------)
	I1024 20:11:24.344263   49198 main.go:141] libmachine: (embed-certs-867165) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:11:24.344279   49198 main.go:141] libmachine: (embed-certs-867165) DBG | About to run SSH command:
	I1024 20:11:24.344290   49198 main.go:141] libmachine: (embed-certs-867165) DBG | exit 0
	I1024 20:11:24.433113   49198 main.go:141] libmachine: (embed-certs-867165) DBG | SSH cmd err, output: <nil>: 
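
The WaitForSSH step above shells out to the system ssh binary with the options listed in the DBG line (ConnectTimeout, StrictHostKeyChecking=no, IdentitiesOnly, a throwaway known_hosts file) and simply runs "exit 0" until it succeeds. A rough sketch of such an invocation via os/exec, with the key path and address copied from the log; this is an illustration, not the real libmachine code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	key := "/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa"
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", key,
    		"-p", "22",
    		"docker@192.168.72.10",
    		"exit 0", // the reachability probe from the log
    	}
    	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
    	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }
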
	I1024 20:11:24.433578   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetConfigRaw
	I1024 20:11:24.434267   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:24.436768   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.437149   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.437178   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.437479   49198 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/config.json ...
	I1024 20:11:24.437738   49198 machine.go:88] provisioning docker machine ...
	I1024 20:11:24.437760   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:24.438014   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.438217   49198 buildroot.go:166] provisioning hostname "embed-certs-867165"
	I1024 20:11:24.438245   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.438431   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.440509   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.440861   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.440884   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.440998   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:24.441155   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.441329   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.441499   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:24.441644   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:24.441990   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:24.442009   49198 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-867165 && echo "embed-certs-867165" | sudo tee /etc/hostname
	I1024 20:11:24.570417   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-867165
	
	I1024 20:11:24.570456   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.573010   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.573421   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.573446   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.573634   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:24.573845   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.574000   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.574100   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:24.574296   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:24.574611   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:24.574628   49198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-867165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-867165/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-867165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:11:24.698255   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
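
The two SSH commands above first set the hostname and then patch /etc/hosts only if no matching entry already exists: rewrite an existing 127.0.1.1 line when present, otherwise append one. A small, hypothetical Go helper that renders the same idempotent snippet for a given machine name:

    package main

    import "fmt"

    // hostsSnippet renders the /etc/hosts guard seen in the provisioning log.
    func hostsSnippet(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    }

    func main() {
    	fmt.Println(hostsSnippet("embed-certs-867165"))
    }
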
	I1024 20:11:24.698281   49198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:11:24.698298   49198 buildroot.go:174] setting up certificates
	I1024 20:11:24.698306   49198 provision.go:83] configureAuth start
	I1024 20:11:24.698317   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.698624   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:24.701552   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.701900   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.701954   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.702044   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.704047   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.704389   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.704413   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.704578   49198 provision.go:138] copyHostCerts
	I1024 20:11:24.704632   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:11:24.704648   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:11:24.704713   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:11:24.704794   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:11:24.704801   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:11:24.704828   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:11:24.704877   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:11:24.704883   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:11:24.704901   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:11:24.704961   49198 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.embed-certs-867165 san=[192.168.72.10 192.168.72.10 localhost 127.0.0.1 minikube embed-certs-867165]
	I1024 20:11:25.212018   49198 provision.go:172] copyRemoteCerts
	I1024 20:11:25.212075   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:11:25.212095   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.214791   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.215112   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.215141   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.215262   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.215490   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.215682   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.215805   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.301782   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:11:25.324352   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1024 20:11:25.346349   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:11:25.368012   49198 provision.go:86] duration metric: configureAuth took 669.695412ms
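
configureAuth regenerates a server certificate whose SANs cover the VM's IP, localhost and the machine name, then copies it to /etc/docker on the guest via scp. A self-contained sketch of producing a certificate with that SAN set using crypto/x509; it is self-signed for brevity, whereas the real flow signs with the minikube CA, so treat it only as an illustration of the SAN handling:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-867165"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs listed in the provision.go:112 line above.
    		DNSNames:    []string{"localhost", "minikube", "embed-certs-867165"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.72.10"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
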
	I1024 20:11:25.368036   49198 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:11:25.368205   49198 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:25.368269   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.370479   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.370739   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.370782   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.370873   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.371063   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.371395   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.371593   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.371760   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:25.372083   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:25.372098   49198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:11:25.685250   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:11:25.685327   49198 machine.go:91] provisioned docker machine in 1.247541762s
	I1024 20:11:25.685347   49198 start.go:300] post-start starting for "embed-certs-867165" (driver="kvm2")
	I1024 20:11:25.685363   49198 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:11:25.685388   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.685781   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:11:25.685813   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.688378   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.688666   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.688712   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.688886   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.689115   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.689274   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.689463   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.775321   49198 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:11:25.779494   49198 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:11:25.779516   49198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:11:25.779590   49198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:11:25.779663   49198 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:11:25.779748   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:11:25.788441   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:25.809843   49198 start.go:303] post-start completed in 124.478424ms
	I1024 20:11:25.809946   49198 fix.go:56] fixHost completed within 19.997269664s
	I1024 20:11:25.809985   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.812709   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.813101   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.813133   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.813265   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.813464   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.813650   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.813819   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.813962   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:25.814293   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:25.814309   49198 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:11:25.925691   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178285.873274561
	
	I1024 20:11:25.925721   49198 fix.go:206] guest clock: 1698178285.873274561
	I1024 20:11:25.925731   49198 fix.go:219] Guest: 2023-10-24 20:11:25.873274561 +0000 UTC Remote: 2023-10-24 20:11:25.809967209 +0000 UTC m=+287.089115618 (delta=63.307352ms)
	I1024 20:11:25.925760   49198 fix.go:190] guest clock delta is within tolerance: 63.307352ms
	I1024 20:11:25.925767   49198 start.go:83] releasing machines lock for "embed-certs-867165", held for 20.113201351s
	I1024 20:11:25.925801   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.926046   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:25.928979   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.929337   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.929369   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.929547   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930011   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930171   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930239   49198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:11:25.930285   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.930332   49198 ssh_runner.go:195] Run: cat /version.json
	I1024 20:11:25.930356   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.932685   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.932918   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933167   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.933197   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933225   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.933254   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933377   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.933548   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.933600   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.933758   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.933773   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.933934   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.933941   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.934075   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:26.046804   49198 ssh_runner.go:195] Run: systemctl --version
	I1024 20:11:26.052139   49198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:11:26.195404   49198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:11:26.201515   49198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:11:26.201602   49198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:11:26.215298   49198 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:11:26.215312   49198 start.go:472] detecting cgroup driver to use...
	I1024 20:11:26.215375   49198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:11:26.228683   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:11:26.240279   49198 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:11:26.240328   49198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:11:26.252314   49198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:11:26.264748   49198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:11:26.363370   49198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:11:26.472219   49198 docker.go:214] disabling docker service ...
	I1024 20:11:26.472293   49198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:11:26.485325   49198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:11:26.497949   49198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:11:26.614981   49198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:11:26.731140   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:11:26.750199   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:11:26.770158   49198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:11:26.770224   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.781180   49198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:11:26.781246   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.791901   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.802435   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.812848   49198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
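	For reference, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager and conmon cgroup the log states. A quick, illustrative way to confirm that on the guest, reusing the same minikube binary and profile as the rest of this run (expected values assume the commands above succeeded):
	out/minikube-linux-amd64 -p embed-certs-867165 ssh 'grep -E "pause_image|cgroup_manager|conmon_cgroup" /etc/crio/crio.conf.d/02-crio.conf'
	# expected output:
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"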
	I1024 20:11:26.826330   49198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:11:26.837268   49198 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:11:26.837350   49198 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:11:26.853637   49198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:11:26.866347   49198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:11:26.985185   49198 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:11:27.154650   49198 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:11:27.154718   49198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:11:27.160801   49198 start.go:540] Will wait 60s for crictl version
	I1024 20:11:27.160848   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:11:27.164920   49198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:11:27.202690   49198 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:11:27.202779   49198 ssh_runner.go:195] Run: crio --version
	I1024 20:11:27.250594   49198 ssh_runner.go:195] Run: crio --version
	I1024 20:11:27.296108   49198 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 20:11:25.950421   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Start
	I1024 20:11:25.950594   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring networks are active...
	I1024 20:11:25.951296   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring network default is active
	I1024 20:11:25.951666   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring network mk-default-k8s-diff-port-643126 is active
	I1024 20:11:25.952059   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Getting domain xml...
	I1024 20:11:25.952807   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Creating domain...
	I1024 20:11:27.231286   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting to get IP...
	I1024 20:11:27.232283   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.232673   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.232749   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.232677   50444 retry.go:31] will retry after 208.58934ms: waiting for machine to come up
	I1024 20:11:27.443376   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.443879   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.443919   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.443821   50444 retry.go:31] will retry after 257.382495ms: waiting for machine to come up
	I1024 20:11:27.703424   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.703968   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.704002   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.703931   50444 retry.go:31] will retry after 397.047762ms: waiting for machine to come up
	I1024 20:11:28.102593   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.103138   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.103169   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:28.103091   50444 retry.go:31] will retry after 512.560427ms: waiting for machine to come up
	I1024 20:11:27.297540   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:27.300396   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:27.300799   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:27.300829   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:27.301066   49198 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1024 20:11:27.305045   49198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:27.320300   49198 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:11:27.320366   49198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:27.359702   49198 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:11:27.359766   49198 ssh_runner.go:195] Run: which lz4
	I1024 20:11:27.363540   49198 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1024 20:11:27.367559   49198 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:11:27.367583   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1024 20:11:28.616845   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.617310   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.617342   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:28.617240   50444 retry.go:31] will retry after 674.554893ms: waiting for machine to come up
	I1024 20:11:29.293139   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:29.293640   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:29.293667   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:29.293603   50444 retry.go:31] will retry after 903.982479ms: waiting for machine to come up
	I1024 20:11:30.199764   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:30.200181   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:30.200218   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:30.200119   50444 retry.go:31] will retry after 835.036056ms: waiting for machine to come up
	I1024 20:11:31.037123   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:31.037584   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:31.037609   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:31.037524   50444 retry.go:31] will retry after 1.242617103s: waiting for machine to come up
	I1024 20:11:32.281808   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:32.282287   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:32.282312   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:32.282243   50444 retry.go:31] will retry after 1.694327665s: waiting for machine to come up
	I1024 20:11:29.249631   49198 crio.go:444] Took 1.886122 seconds to copy over tarball
	I1024 20:11:29.249712   49198 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:11:32.249370   49198 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.999632152s)
	I1024 20:11:32.249396   49198 crio.go:451] Took 2.999736 seconds to extract the tarball
	I1024 20:11:32.249404   49198 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:11:32.290929   49198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:32.335293   49198 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:11:32.335313   49198 cache_images.go:84] Images are preloaded, skipping loading
	I1024 20:11:32.335377   49198 ssh_runner.go:195] Run: crio config
	I1024 20:11:32.394988   49198 cni.go:84] Creating CNI manager for ""
	I1024 20:11:32.395016   49198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:32.395039   49198 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:11:32.395066   49198 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.10 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-867165 NodeName:embed-certs-867165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:11:32.395267   49198 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-867165"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:11:32.395363   49198 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-867165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-867165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:11:32.395412   49198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:11:32.408764   49198 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:11:32.408827   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:11:32.417504   49198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1024 20:11:32.433991   49198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:11:32.450599   49198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1024 20:11:32.467822   49198 ssh_runner.go:195] Run: grep 192.168.72.10	control-plane.minikube.internal$ /etc/hosts
	I1024 20:11:32.471830   49198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:32.485398   49198 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165 for IP: 192.168.72.10
	I1024 20:11:32.485440   49198 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:32.485591   49198 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:11:32.485627   49198 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:11:32.485692   49198 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/client.key
	I1024 20:11:32.485751   49198 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.key.802f554a
	I1024 20:11:32.485787   49198 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.key
	I1024 20:11:32.485883   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:11:32.485913   49198 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:11:32.485924   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:11:32.485946   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:11:32.485974   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:11:32.485999   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:11:32.486054   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:32.486664   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:11:32.510981   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:11:32.533691   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:11:32.556372   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:11:32.578805   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:11:32.601563   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:11:32.624846   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:11:32.648498   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:11:32.672429   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:11:32.696146   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:11:32.719078   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:11:32.742894   49198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:11:32.758998   49198 ssh_runner.go:195] Run: openssl version
	I1024 20:11:32.764797   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:11:32.774075   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.778755   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.778809   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.784097   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:11:32.793365   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:11:32.802532   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.806890   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.806936   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.812430   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:11:32.821767   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:11:32.830930   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.835401   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.835455   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.840880   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:11:32.850124   49198 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:11:32.854525   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:11:32.860161   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:11:32.866096   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:11:32.873246   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:11:32.880430   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:11:32.887436   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1024 20:11:32.892960   49198 kubeadm.go:404] StartCluster: {Name:embed-certs-867165 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:embed-certs-867165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:11:32.893073   49198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:11:32.893116   49198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:32.930748   49198 cri.go:89] found id: ""
	I1024 20:11:32.930817   49198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:11:32.939716   49198 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:11:32.939738   49198 kubeadm.go:636] restartCluster start
	I1024 20:11:32.939785   49198 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:11:32.947747   49198 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:32.948905   49198 kubeconfig.go:92] found "embed-certs-867165" server: "https://192.168.72.10:8443"
	I1024 20:11:32.951235   49198 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:11:32.959165   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:32.959215   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:32.970896   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:32.970912   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:32.970957   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:32.980621   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:33.481345   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:33.481442   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:33.492666   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:33.979087   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:33.979490   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:33.979520   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:33.979433   50444 retry.go:31] will retry after 1.877176786s: waiting for machine to come up
	I1024 20:11:35.859337   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:35.859735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:35.859758   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:35.859683   50444 retry.go:31] will retry after 2.235459842s: waiting for machine to come up
	I1024 20:11:38.097481   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:38.097924   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:38.097958   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:38.097878   50444 retry.go:31] will retry after 3.083066899s: waiting for machine to come up
	I1024 20:11:33.981370   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.077568   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.088845   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:34.481489   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.481554   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.492934   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:34.981614   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.981744   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.993154   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:35.480679   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:35.480752   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:35.492474   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:35.981612   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:35.981703   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:35.992389   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:36.480877   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:36.480982   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:36.492142   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:36.980700   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:36.980784   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:36.992439   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:37.480962   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:37.481040   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:37.492219   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:37.980706   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:37.980814   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:37.992040   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:38.481668   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:38.481764   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:38.493319   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.182306   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:41.182647   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:41.182674   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:41.182602   50444 retry.go:31] will retry after 3.348794863s: waiting for machine to come up
	I1024 20:11:38.981418   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:38.981504   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:38.992810   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:39.481357   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:39.481448   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:39.492521   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:39.981019   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:39.981109   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:39.992766   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:40.481341   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:40.481404   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:40.492180   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:40.981106   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:40.981205   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:40.991931   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.481563   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:41.481629   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:41.492601   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.981132   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:41.981226   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:41.992061   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:42.481647   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:42.481713   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:42.492524   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:42.960175   49198 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
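	The long run of "Checking apiserver status" entries above is minikube polling for a kube-apiserver process with sudo pgrep, roughly every 500ms; once the deadline passes with no hit, restartCluster gives up and reconfigures the cluster, which is the "context deadline exceeded" decision on this line. A minimal shell sketch of that polling pattern (illustrative only, not minikube's actual Go implementation; the 10s deadline is an assumption for the example):
	deadline=$(( $(date +%s) + 10 ))           # assumed deadline for the sketch
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  if [ "$(date +%s)" -ge "$deadline" ]; then
	    echo 'apiserver not running: context deadline exceeded' >&2
	    break
	  fi
	  sleep 0.5                                # matches the ~500ms spacing of the log entries above
	done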
	I1024 20:11:42.960230   49198 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:11:42.960243   49198 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:11:42.960322   49198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:42.998685   49198 cri.go:89] found id: ""
	I1024 20:11:42.998794   49198 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:11:43.013829   49198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:11:43.023081   49198 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:11:43.023161   49198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:11:43.032165   49198 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:11:43.032189   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:43.148027   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:45.942484   50077 start.go:369] acquired machines lock for "old-k8s-version-467375" in 2m12.988914754s
	I1024 20:11:45.942540   50077 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:45.942548   50077 fix.go:54] fixHost starting: 
	I1024 20:11:45.942969   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:45.943007   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:45.960424   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I1024 20:11:45.960851   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:45.961468   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:11:45.961498   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:45.961852   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:45.962045   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:11:45.962231   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:11:45.963803   50077 fix.go:102] recreateIfNeeded on old-k8s-version-467375: state=Stopped err=<nil>
	I1024 20:11:45.963841   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	W1024 20:11:45.964018   50077 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:45.965809   50077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467375" ...
	I1024 20:11:44.535120   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.535710   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Found IP for machine: 192.168.61.148
	I1024 20:11:44.535735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has current primary IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.535742   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Reserving static IP address...
	I1024 20:11:44.536160   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Reserved static IP address: 192.168.61.148
	I1024 20:11:44.536181   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for SSH to be available...
	I1024 20:11:44.536196   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-643126", mac: "52:54:00:9d:a9:b2", ip: "192.168.61.148"} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.536225   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | skip adding static IP to network mk-default-k8s-diff-port-643126 - found existing host DHCP lease matching {name: "default-k8s-diff-port-643126", mac: "52:54:00:9d:a9:b2", ip: "192.168.61.148"}
	I1024 20:11:44.536247   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Getting to WaitForSSH function...
	I1024 20:11:44.538297   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.538634   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.538669   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.538819   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Using SSH client type: external
	I1024 20:11:44.538846   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa (-rw-------)
	I1024 20:11:44.538897   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:11:44.538935   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | About to run SSH command:
	I1024 20:11:44.538947   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | exit 0
	I1024 20:11:44.629136   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | SSH cmd err, output: <nil>: 
	I1024 20:11:44.629505   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetConfigRaw
	I1024 20:11:44.630190   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:44.632462   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.632782   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.632807   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.633035   49708 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/config.json ...
	I1024 20:11:44.633215   49708 machine.go:88] provisioning docker machine ...
	I1024 20:11:44.633231   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:44.633416   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.633566   49708 buildroot.go:166] provisioning hostname "default-k8s-diff-port-643126"
	I1024 20:11:44.633580   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.633778   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.635853   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.636184   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.636217   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.636295   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:44.636462   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.636608   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.636742   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:44.636890   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:44.637307   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:44.637328   49708 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-643126 && echo "default-k8s-diff-port-643126" | sudo tee /etc/hostname
	I1024 20:11:44.775436   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-643126
	
	I1024 20:11:44.775468   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.778835   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.779280   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.779316   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.779494   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:44.779679   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.779810   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.779933   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:44.780147   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:44.780489   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:44.780518   49708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-643126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-643126/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-643126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:11:44.921274   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:44.921332   49708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:11:44.921368   49708 buildroot.go:174] setting up certificates
	I1024 20:11:44.921385   49708 provision.go:83] configureAuth start
	I1024 20:11:44.921404   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.921747   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:44.924977   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.925413   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.925443   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.925641   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.928106   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.928443   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.928484   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.928617   49708 provision.go:138] copyHostCerts
	I1024 20:11:44.928680   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:11:44.928703   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:11:44.928772   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:11:44.928918   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:11:44.928935   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:11:44.928969   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:11:44.929052   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:11:44.929063   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:11:44.929089   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:11:44.929157   49708 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-643126 san=[192.168.61.148 192.168.61.148 localhost 127.0.0.1 minikube default-k8s-diff-port-643126]
	I1024 20:11:45.170614   49708 provision.go:172] copyRemoteCerts
	I1024 20:11:45.170679   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:11:45.170706   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.173876   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.174213   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.174251   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.174522   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.174744   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.174909   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.175033   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.266012   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1024 20:11:45.294626   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:11:45.323773   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:11:45.347515   49708 provision.go:86] duration metric: configureAuth took 426.107365ms
	I1024 20:11:45.347536   49708 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:11:45.347741   49708 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:45.347830   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.351151   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.351529   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.351560   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.351729   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.351898   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.352132   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.352359   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.352540   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:45.353017   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:45.353045   49708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:11:45.673767   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:11:45.673797   49708 machine.go:91] provisioned docker machine in 1.04057128s
	I1024 20:11:45.673809   49708 start.go:300] post-start starting for "default-k8s-diff-port-643126" (driver="kvm2")
	I1024 20:11:45.673821   49708 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:11:45.673844   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.674180   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:11:45.674213   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.677192   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.677621   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.677663   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.677817   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.678021   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.678180   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.678322   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.769507   49708 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:11:45.774136   49708 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:11:45.774161   49708 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:11:45.774240   49708 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:11:45.774333   49708 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:11:45.774456   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:11:45.782941   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:45.806536   49708 start.go:303] post-start completed in 132.710109ms
	I1024 20:11:45.806565   49708 fix.go:56] fixHost completed within 19.880653804s
	I1024 20:11:45.806602   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.809496   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.809854   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.809892   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.810096   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.810335   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.810534   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.810697   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.810870   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:45.811297   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:45.811312   49708 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:11:45.942309   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178305.886866858
	
	I1024 20:11:45.942334   49708 fix.go:206] guest clock: 1698178305.886866858
	I1024 20:11:45.942343   49708 fix.go:219] Guest: 2023-10-24 20:11:45.886866858 +0000 UTC Remote: 2023-10-24 20:11:45.806569839 +0000 UTC m=+222.349889294 (delta=80.297019ms)
	I1024 20:11:45.942388   49708 fix.go:190] guest clock delta is within tolerance: 80.297019ms
	I1024 20:11:45.942399   49708 start.go:83] releasing machines lock for "default-k8s-diff-port-643126", held for 20.016514097s
	I1024 20:11:45.942428   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.942819   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:45.946079   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.946507   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.946548   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.946681   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947120   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947286   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947353   49708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:11:45.947411   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.947564   49708 ssh_runner.go:195] Run: cat /version.json
	I1024 20:11:45.947591   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.950504   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.950930   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.950984   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951010   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951176   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.951342   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.951499   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.951522   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.951526   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951638   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.951793   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.951946   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.952178   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.952345   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:46.043544   49708 ssh_runner.go:195] Run: systemctl --version
	I1024 20:11:46.072510   49708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:11:46.230010   49708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:11:46.237538   49708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:11:46.237608   49708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:11:46.259449   49708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:11:46.259468   49708 start.go:472] detecting cgroup driver to use...
	I1024 20:11:46.259530   49708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:11:46.278708   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:11:46.292769   49708 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:11:46.292827   49708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:11:46.311808   49708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:11:46.329420   49708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:11:46.452375   49708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:11:46.584041   49708 docker.go:214] disabling docker service ...
	I1024 20:11:46.584114   49708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:11:46.606114   49708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:11:46.623302   49708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:11:46.732771   49708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:11:46.862687   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:11:46.879573   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:11:46.900885   49708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:11:46.900955   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.911441   49708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:11:46.911500   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.921674   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.931937   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.942104   49708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:11:46.952610   49708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:11:46.961808   49708 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:11:46.961884   49708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:11:46.977789   49708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:11:46.990089   49708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:11:47.130248   49708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:11:47.307336   49708 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:11:47.307402   49708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:11:47.316743   49708 start.go:540] Will wait 60s for crictl version
	I1024 20:11:47.316795   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:11:47.321526   49708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:11:47.369079   49708 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:11:47.369169   49708 ssh_runner.go:195] Run: crio --version
	I1024 20:11:47.419428   49708 ssh_runner.go:195] Run: crio --version
	I1024 20:11:47.477016   49708 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 20:11:45.967071   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Start
	I1024 20:11:45.967249   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring networks are active...
	I1024 20:11:45.967957   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring network default is active
	I1024 20:11:45.968324   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring network mk-old-k8s-version-467375 is active
	I1024 20:11:45.968743   50077 main.go:141] libmachine: (old-k8s-version-467375) Getting domain xml...
	I1024 20:11:45.969525   50077 main.go:141] libmachine: (old-k8s-version-467375) Creating domain...
	I1024 20:11:47.346548   50077 main.go:141] libmachine: (old-k8s-version-467375) Waiting to get IP...
	I1024 20:11:47.347505   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.347894   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.347980   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.347887   50579 retry.go:31] will retry after 232.244798ms: waiting for machine to come up
	I1024 20:11:47.581582   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.582087   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.582118   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.582044   50579 retry.go:31] will retry after 319.930019ms: waiting for machine to come up
	I1024 20:11:47.478565   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:47.481659   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:47.482040   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:47.482066   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:47.482265   49708 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1024 20:11:47.487054   49708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:47.499693   49708 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:11:47.499765   49708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:47.551897   49708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:11:47.551978   49708 ssh_runner.go:195] Run: which lz4
	I1024 20:11:47.557026   49708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 20:11:47.562364   49708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:11:47.562393   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1024 20:11:43.852350   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.048386   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.117774   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.202966   49198 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:11:44.203042   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:44.215680   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:44.726471   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:45.226100   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:45.726494   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.226510   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.726607   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.758294   49198 api_server.go:72] duration metric: took 2.555329199s to wait for apiserver process to appear ...
	I1024 20:11:46.758319   49198 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:11:46.758339   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:46.758872   49198 api_server.go:269] stopped: https://192.168.72.10:8443/healthz: Get "https://192.168.72.10:8443/healthz": dial tcp 192.168.72.10:8443: connect: connection refused
	I1024 20:11:46.758909   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:46.759318   49198 api_server.go:269] stopped: https://192.168.72.10:8443/healthz: Get "https://192.168.72.10:8443/healthz": dial tcp 192.168.72.10:8443: connect: connection refused
	I1024 20:11:47.260047   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:50.910793   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:11:50.910830   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:11:50.910852   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:50.943069   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:11:50.943100   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:11:51.259498   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:51.265278   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:11:51.265316   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:11:51.759494   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:51.767253   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:11:51.767280   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:11:52.259758   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:52.265202   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 200:
	ok
	I1024 20:11:52.277533   49198 api_server.go:141] control plane version: v1.28.3
	I1024 20:11:52.277561   49198 api_server.go:131] duration metric: took 5.51923389s to wait for apiserver health ...
	I1024 20:11:52.277572   49198 cni.go:84] Creating CNI manager for ""
	I1024 20:11:52.277580   49198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:52.279542   49198 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:11:47.904065   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.904524   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.904551   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.904467   50579 retry.go:31] will retry after 440.170251ms: waiting for machine to come up
	I1024 20:11:48.346206   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:48.346778   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:48.346802   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:48.346686   50579 retry.go:31] will retry after 472.001777ms: waiting for machine to come up
	I1024 20:11:48.820100   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:48.820625   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:48.820660   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:48.820533   50579 retry.go:31] will retry after 487.055032ms: waiting for machine to come up
	I1024 20:11:49.309351   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:49.309816   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:49.309836   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:49.309751   50579 retry.go:31] will retry after 945.474211ms: waiting for machine to come up
	I1024 20:11:50.257106   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:50.257611   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:50.257641   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:50.257563   50579 retry.go:31] will retry after 915.312538ms: waiting for machine to come up
	I1024 20:11:51.174245   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:51.174832   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:51.174889   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:51.174792   50579 retry.go:31] will retry after 1.09533855s: waiting for machine to come up
	I1024 20:11:52.271604   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:52.272082   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:52.272111   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:52.272041   50579 retry.go:31] will retry after 1.411155014s: waiting for machine to come up
	I1024 20:11:49.517078   49708 crio.go:444] Took 1.960093 seconds to copy over tarball
	I1024 20:11:49.517170   49708 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:11:53.113830   49708 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.596633239s)
	I1024 20:11:53.113858   49708 crio.go:451] Took 3.596755 seconds to extract the tarball
	I1024 20:11:53.113865   49708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:11:53.157476   49708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:53.204980   49708 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:11:53.205004   49708 cache_images.go:84] Images are preloaded, skipping loading
	I1024 20:11:53.205090   49708 ssh_runner.go:195] Run: crio config
	I1024 20:11:53.264588   49708 cni.go:84] Creating CNI manager for ""
	I1024 20:11:53.264613   49708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:53.264634   49708 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:11:53.264662   49708 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.148 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-643126 NodeName:default-k8s-diff-port-643126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:11:53.264869   49708 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.148
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-643126"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:11:53.264975   49708 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-643126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-643126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1024 20:11:53.265054   49708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:11:53.275886   49708 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:11:53.275982   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:11:53.286132   49708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1024 20:11:53.303735   49708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:11:53.319522   49708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1024 20:11:53.338388   49708 ssh_runner.go:195] Run: grep 192.168.61.148	control-plane.minikube.internal$ /etc/hosts
	I1024 20:11:53.343108   49708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:53.355662   49708 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126 for IP: 192.168.61.148
	I1024 20:11:53.355709   49708 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:53.355873   49708 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:11:53.355910   49708 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:11:53.356023   49708 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.key
	I1024 20:11:53.356086   49708 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.key.8ba5a111
	I1024 20:11:53.356122   49708 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.key
	I1024 20:11:53.356237   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:11:53.356265   49708 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:11:53.356275   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:11:53.356299   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:11:53.356320   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:11:53.356341   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:11:53.356377   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:53.357029   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:11:53.379968   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:11:53.401871   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:11:53.423699   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:11:53.445338   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:11:53.469994   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:11:53.495061   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:11:52.281055   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:11:52.299421   49198 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:11:52.322020   49198 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:11:52.334273   49198 system_pods.go:59] 8 kube-system pods found
	I1024 20:11:52.334318   49198 system_pods.go:61] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:11:52.334332   49198 system_pods.go:61] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:11:52.334356   49198 system_pods.go:61] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:11:52.334372   49198 system_pods.go:61] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:11:52.334389   49198 system_pods.go:61] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:11:52.334401   49198 system_pods.go:61] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:11:52.334413   49198 system_pods.go:61] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:11:52.334425   49198 system_pods.go:61] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:11:52.334438   49198 system_pods.go:74] duration metric: took 12.395036ms to wait for pod list to return data ...
	I1024 20:11:52.334450   49198 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:11:52.338486   49198 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:11:52.338518   49198 node_conditions.go:123] node cpu capacity is 2
	I1024 20:11:52.338530   49198 node_conditions.go:105] duration metric: took 4.073559ms to run NodePressure ...
	I1024 20:11:52.338555   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:55.075569   49198 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.736987276s)
	I1024 20:11:55.075611   49198 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:11:55.080481   49198 kubeadm.go:787] kubelet initialised
	I1024 20:11:55.080508   49198 kubeadm.go:788] duration metric: took 4.884507ms waiting for restarted kubelet to initialise ...
	I1024 20:11:55.080519   49198 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:11:55.087371   49198 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.092583   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.092616   49198 pod_ready.go:81] duration metric: took 5.215308ms waiting for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.092627   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.092636   49198 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.098518   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "etcd-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.098540   49198 pod_ready.go:81] duration metric: took 5.887969ms waiting for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.098551   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "etcd-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.098560   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.103375   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.103400   49198 pod_ready.go:81] duration metric: took 4.83092ms waiting for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.103411   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.103419   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.108416   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.108443   49198 pod_ready.go:81] duration metric: took 5.016219ms waiting for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.108454   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.108462   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.482846   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-proxy-thkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.482873   49198 pod_ready.go:81] duration metric: took 374.401616ms waiting for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.482885   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-proxy-thkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.482897   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.879895   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.879922   49198 pod_ready.go:81] duration metric: took 397.016576ms waiting for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.879935   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.879947   49198 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:56.280405   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:56.280445   49198 pod_ready.go:81] duration metric: took 400.488591ms waiting for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:56.280464   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:56.280475   49198 pod_ready.go:38] duration metric: took 1.19994252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
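
[Editor's note] The pod_ready.go wait loop above repeatedly checks each system-critical pod and skips it while the hosting node still reports "Ready":"False". The following is an illustrative sketch only, not minikube's implementation: a minimal client-go poll of the PodReady condition for one of the pods named above. The kubeconfig path and timeouts are hypothetical.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig location; the test harness uses its own profile paths.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-6qq4r", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
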
	I1024 20:11:56.280498   49198 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:11:56.298423   49198 ops.go:34] apiserver oom_adj: -16
	I1024 20:11:56.298445   49198 kubeadm.go:640] restartCluster took 23.358699894s
	I1024 20:11:56.298455   49198 kubeadm.go:406] StartCluster complete in 23.405500606s
	I1024 20:11:56.298474   49198 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:56.298551   49198 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:11:56.300724   49198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:56.300999   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:11:56.301104   49198 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:11:56.301193   49198 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-867165"
	I1024 20:11:56.301203   49198 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:56.301216   49198 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-867165"
	W1024 20:11:56.301261   49198 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:11:56.301260   49198 addons.go:69] Setting metrics-server=true in profile "embed-certs-867165"
	I1024 20:11:56.301290   49198 addons.go:69] Setting default-storageclass=true in profile "embed-certs-867165"
	I1024 20:11:56.301312   49198 addons.go:231] Setting addon metrics-server=true in "embed-certs-867165"
	I1024 20:11:56.301315   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	W1024 20:11:56.301328   49198 addons.go:240] addon metrics-server should already be in state true
	I1024 20:11:56.301331   49198 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-867165"
	I1024 20:11:56.301418   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	I1024 20:11:56.301743   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301744   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301767   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.301771   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.301826   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301867   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.307030   49198 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-867165" context rescaled to 1 replicas
	I1024 20:11:56.307062   49198 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:11:56.309053   49198 out.go:177] * Verifying Kubernetes components...
	I1024 20:11:56.310743   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:11:56.317523   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I1024 20:11:56.317889   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.318430   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.318450   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.318881   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.319437   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.319486   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.320723   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1024 20:11:56.320906   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39685
	I1024 20:11:56.321377   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.321491   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.322079   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.322107   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.322370   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.322389   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.322464   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.322770   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.322829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.323410   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.323444   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.326654   49198 addons.go:231] Setting addon default-storageclass=true in "embed-certs-867165"
	W1024 20:11:56.326674   49198 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:11:56.326700   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	I1024 20:11:56.327084   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.327111   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.335811   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I1024 20:11:56.336310   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.336762   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.336774   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.337109   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.337272   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.338868   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.340964   49198 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:11:56.342438   49198 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:11:56.342454   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:11:56.342472   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.341955   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I1024 20:11:56.343402   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.344019   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.344038   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.344502   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.344694   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.345753   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.346097   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I1024 20:11:56.346367   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.346398   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.346660   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.346666   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.346829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.348534   49198 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:11:53.684729   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:53.685093   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:53.685129   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:53.685030   50579 retry.go:31] will retry after 1.793178726s: waiting for machine to come up
	I1024 20:11:55.481150   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:55.481696   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:55.481729   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:55.481639   50579 retry.go:31] will retry after 2.680463816s: waiting for machine to come up
	I1024 20:11:56.347164   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.347192   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.350114   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.350141   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:11:56.350155   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:11:56.350174   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.350270   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.350397   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.350847   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.351478   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.351514   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.354060   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.354451   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.354472   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.354625   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.354819   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.354978   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.355161   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.371309   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1024 20:11:56.371746   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.372300   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.372325   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.372764   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.372981   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.374651   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.374894   49198 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:11:56.374911   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:11:56.374934   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.377962   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.378385   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.378408   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.378585   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.378789   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.378954   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.379083   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.471271   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:11:56.504355   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:11:56.504382   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:11:56.552351   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:11:56.576037   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:11:56.576068   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:11:56.606745   49198 node_ready.go:35] waiting up to 6m0s for node "embed-certs-867165" to be "Ready" ...
	I1024 20:11:56.606772   49198 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:11:56.620862   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:11:56.620897   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:11:56.676519   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:11:57.851757   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.380440836s)
	I1024 20:11:57.851814   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.851816   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.299429923s)
	I1024 20:11:57.851829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.851865   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.851882   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852242   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852262   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852272   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.852282   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852368   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852412   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852441   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.852467   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852412   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.852537   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852560   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852814   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.852859   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852877   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860105   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183533543s)
	I1024 20:11:57.860176   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.860195   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.860492   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.860494   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.860515   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860526   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.860537   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.860828   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.860857   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.860876   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860886   49198 addons.go:467] Verifying addon metrics-server=true in "embed-certs-867165"
	I1024 20:11:57.860990   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.861011   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.861220   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.861227   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.861236   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.864370   49198 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1024 20:11:53.521030   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:11:53.844700   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:11:53.868393   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:11:53.892495   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:11:53.916345   49708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:11:53.935576   49708 ssh_runner.go:195] Run: openssl version
	I1024 20:11:53.943066   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:11:53.957325   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.962959   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.963026   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.969104   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:11:53.980253   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:11:53.990977   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:53.995906   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:53.995992   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:54.001847   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:11:54.012635   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:11:54.023490   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.028300   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.028355   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.033965   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:11:54.044984   49708 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:11:54.049588   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:11:54.055434   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:11:54.061692   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:11:54.068131   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:11:54.074484   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:11:54.080349   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
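
[Editor's note] The openssl invocations above (`openssl x509 -noout -checkend 86400`) verify that each control-plane certificate remains valid for at least the next 24 hours. As a hedged illustration under that assumption, the sketch below does the equivalent check with Go's crypto/x509; the certificate path shown is taken from the log, everything else is hypothetical.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration would be needed")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
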
	I1024 20:11:54.086499   49708 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-643126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-643126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.148 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:11:54.086598   49708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:11:54.086655   49708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:54.127406   49708 cri.go:89] found id: ""
	I1024 20:11:54.127494   49708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:11:54.137720   49708 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:11:54.137743   49708 kubeadm.go:636] restartCluster start
	I1024 20:11:54.137801   49708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:11:54.147925   49708 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.149006   49708 kubeconfig.go:92] found "default-k8s-diff-port-643126" server: "https://192.168.61.148:8444"
	I1024 20:11:54.151513   49708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:11:54.162303   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.162371   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.173715   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.173763   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.173816   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.184641   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.685342   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.685431   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.698640   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:55.185173   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:55.185284   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:55.201355   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:55.684814   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:55.684885   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:55.696664   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:56.185711   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:56.185795   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:56.201419   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:56.684932   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:56.685029   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:56.701458   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.185009   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:57.185111   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:57.201166   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.685654   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:57.685739   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:57.701496   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:58.185022   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:58.185076   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:58.197394   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.865715   49198 addons.go:502] enable addons completed in 1.564611111s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1024 20:11:58.683275   49198 node_ready.go:58] node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:58.163942   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:58.164342   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:58.164369   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:58.164308   50579 retry.go:31] will retry after 2.238050336s: waiting for machine to come up
	I1024 20:12:00.403552   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:00.403947   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:12:00.403975   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:12:00.403907   50579 retry.go:31] will retry after 3.901299207s: waiting for machine to come up
	I1024 20:11:58.685131   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:58.685225   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:58.700458   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:59.184854   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:59.184936   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:59.200498   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:59.685159   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:59.685260   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:59.698793   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.185350   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:00.185418   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:00.200046   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.685255   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:00.685341   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:00.698229   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:01.185036   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:01.185105   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:01.200083   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:01.685617   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:01.685700   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:01.697442   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:02.184897   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:02.184980   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:02.196208   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:02.685769   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:02.685854   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:02.697356   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:03.184898   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:03.184977   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:03.196522   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.684425   49198 node_ready.go:58] node "embed-certs-867165" has status "Ready":"False"
	I1024 20:12:01.683130   49198 node_ready.go:49] node "embed-certs-867165" has status "Ready":"True"
	I1024 20:12:01.683154   49198 node_ready.go:38] duration metric: took 5.076371929s waiting for node "embed-certs-867165" to be "Ready" ...
	I1024 20:12:01.683162   49198 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:01.689566   49198 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:01.695393   49198 pod_ready.go:92] pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:01.695416   49198 pod_ready.go:81] duration metric: took 5.827696ms waiting for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:01.695427   49198 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:03.712775   49198 pod_ready.go:102] pod "etcd-embed-certs-867165" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:04.306338   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:04.306804   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:12:04.306835   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:12:04.306770   50579 retry.go:31] will retry after 5.15211395s: waiting for machine to come up
	I1024 20:12:03.685737   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:03.685827   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:03.697510   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:04.163385   49708 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:12:04.163416   49708 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:12:04.163449   49708 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:12:04.163520   49708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:04.209780   49708 cri.go:89] found id: ""
	I1024 20:12:04.209834   49708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:12:04.226347   49708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:12:04.235134   49708 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:12:04.235185   49708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:04.243361   49708 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:04.243380   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:04.370510   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.461155   49708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090606159s)
	I1024 20:12:05.461192   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.649281   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.742338   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.829426   49708 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:12:05.829494   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:05.841869   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:06.356907   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:06.856157   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:07.356140   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:07.856020   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:08.356129   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:08.382595   49708 api_server.go:72] duration metric: took 2.553177252s to wait for apiserver process to appear ...
	I1024 20:12:08.382622   49708 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:12:08.382641   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
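
[Editor's note] At this point the log switches from waiting for the apiserver process to appear to polling its healthz endpoint at https://192.168.61.148:8444/healthz. The sketch below is a simplified, hypothetical version of such a poll; a real client would trust the cluster CA, and InsecureSkipVerify is used here only to keep the example self-contained.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Short-timeout client that skips TLS verification (simplification for this sketch only).
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.148:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("apiserver healthy: %s\n", body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
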
	I1024 20:12:04.213550   49198 pod_ready.go:92] pod "etcd-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.213573   49198 pod_ready.go:81] duration metric: took 2.518138084s waiting for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.213585   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.218813   49198 pod_ready.go:92] pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.218841   49198 pod_ready.go:81] duration metric: took 5.247061ms waiting for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.218855   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.224562   49198 pod_ready.go:92] pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.224585   49198 pod_ready.go:81] duration metric: took 5.720637ms waiting for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.224597   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.484197   49198 pod_ready.go:92] pod "kube-proxy-thkqr" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.484216   49198 pod_ready.go:81] duration metric: took 259.611869ms waiting for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.484224   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.883941   49198 pod_ready.go:92] pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.883968   49198 pod_ready.go:81] duration metric: took 399.73679ms waiting for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.883982   49198 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:07.193414   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:10.878419   49071 start.go:369] acquired machines lock for "no-preload-014826" in 1m0.065559113s
	I1024 20:12:10.878467   49071 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:12:10.878475   49071 fix.go:54] fixHost starting: 
	I1024 20:12:10.878869   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:10.878901   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:10.898307   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I1024 20:12:10.898732   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:10.899250   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:12:10.899268   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:10.899614   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:10.899790   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:10.899933   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:12:10.901569   49071 fix.go:102] recreateIfNeeded on no-preload-014826: state=Stopped err=<nil>
	I1024 20:12:10.901593   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	W1024 20:12:10.901753   49071 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:12:10.904367   49071 out.go:177] * Restarting existing kvm2 VM for "no-preload-014826" ...
	I1024 20:12:09.462373   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.462813   50077 main.go:141] libmachine: (old-k8s-version-467375) Found IP for machine: 192.168.39.71
	I1024 20:12:09.462836   50077 main.go:141] libmachine: (old-k8s-version-467375) Reserving static IP address...
	I1024 20:12:09.462853   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has current primary IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.463385   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "old-k8s-version-467375", mac: "52:54:00:28:42:97", ip: "192.168.39.71"} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.463423   50077 main.go:141] libmachine: (old-k8s-version-467375) Reserved static IP address: 192.168.39.71
	I1024 20:12:09.463442   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | skip adding static IP to network mk-old-k8s-version-467375 - found existing host DHCP lease matching {name: "old-k8s-version-467375", mac: "52:54:00:28:42:97", ip: "192.168.39.71"}
	I1024 20:12:09.463463   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Getting to WaitForSSH function...
	I1024 20:12:09.463484   50077 main.go:141] libmachine: (old-k8s-version-467375) Waiting for SSH to be available...
	I1024 20:12:09.465635   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.465951   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.465979   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.466131   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Using SSH client type: external
	I1024 20:12:09.466167   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa (-rw-------)
	I1024 20:12:09.466210   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:12:09.466227   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | About to run SSH command:
	I1024 20:12:09.466256   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | exit 0
	I1024 20:12:09.565274   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | SSH cmd err, output: <nil>: 
	I1024 20:12:09.565647   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetConfigRaw
	I1024 20:12:09.566251   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:09.569078   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.569551   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.569585   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.569863   50077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:12:09.570097   50077 machine.go:88] provisioning docker machine ...
	I1024 20:12:09.570122   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:09.570355   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.570604   50077 buildroot.go:166] provisioning hostname "old-k8s-version-467375"
	I1024 20:12:09.570634   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.570807   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.573170   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.573560   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.573587   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.573757   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:09.573934   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.574080   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.574209   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:09.574414   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:09.574840   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:09.574858   50077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467375 && echo "old-k8s-version-467375" | sudo tee /etc/hostname
	I1024 20:12:09.718150   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467375
	
	I1024 20:12:09.718201   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.721079   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.721461   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.721495   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.721653   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:09.721865   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.722016   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.722167   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:09.722324   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:09.722712   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:09.722732   50077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467375' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467375/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467375' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:12:09.865069   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
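The two SSH commands above (write /etc/hostname, then patch the 127.0.1.1 line in /etc/hosts) boil down to something like the following sketch, where runSSH is a hypothetical helper and not minikube's actual SSH runner:

package provision

import "fmt"

// runSSHFunc executes a shell command on the guest VM (hypothetical helper).
type runSSHFunc func(cmd string) (string, error)

// setHostname mirrors the two commands logged above: set the hostname and
// write /etc/hostname, then keep /etc/hosts consistent with the new name.
func setHostname(runSSH runSSHFunc, name string) error {
	if _, err := runSSH(fmt.Sprintf("sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)); err != nil {
		return fmt.Errorf("set hostname: %w", err)
	}
	// Replace an existing 127.0.1.1 entry, or append one if none exists.
	script := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	_, err := runSSH(script)
	return err
}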
	I1024 20:12:09.865098   50077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:12:09.865125   50077 buildroot.go:174] setting up certificates
	I1024 20:12:09.865136   50077 provision.go:83] configureAuth start
	I1024 20:12:09.865151   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.865449   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:09.868055   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.868480   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.868513   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.868693   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.870838   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.871203   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.871227   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.871363   50077 provision.go:138] copyHostCerts
	I1024 20:12:09.871411   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:12:09.871423   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:12:09.871490   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:12:09.871613   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:12:09.871625   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:12:09.871655   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:12:09.871743   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:12:09.871753   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:12:09.871783   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:12:09.871856   50077 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467375 san=[192.168.39.71 192.168.39.71 localhost 127.0.0.1 minikube old-k8s-version-467375]
	I1024 20:12:10.091178   50077 provision.go:172] copyRemoteCerts
	I1024 20:12:10.091229   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:12:10.091253   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.094245   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.094550   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.094590   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.094759   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.094955   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.095123   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.095271   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.192715   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:12:10.216110   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:12:10.239468   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 20:12:10.263113   50077 provision.go:86] duration metric: configureAuth took 397.957727ms
	I1024 20:12:10.263138   50077 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:12:10.263366   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:12:10.263480   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.265995   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.266293   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.266334   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.266467   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.266696   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.266863   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.267027   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.267168   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:10.267653   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:10.267677   50077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:12:10.596009   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:12:10.596032   50077 machine.go:91] provisioned docker machine in 1.025920355s
	I1024 20:12:10.596041   50077 start.go:300] post-start starting for "old-k8s-version-467375" (driver="kvm2")
	I1024 20:12:10.596050   50077 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:12:10.596075   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.596415   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:12:10.596450   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.598886   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.599234   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.599259   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.599446   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.599647   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.599812   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.599955   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.697045   50077 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:12:10.701363   50077 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:12:10.701387   50077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:12:10.701458   50077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:12:10.701546   50077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:12:10.701653   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:12:10.712072   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:10.737471   50077 start.go:303] post-start completed in 141.415073ms
	I1024 20:12:10.737508   50077 fix.go:56] fixHost completed within 24.794946143s
	I1024 20:12:10.737533   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.740438   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.740792   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.740820   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.741024   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.741247   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.741428   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.741691   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.741861   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:10.742407   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:10.742431   50077 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:12:10.878250   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178330.824734287
	
	I1024 20:12:10.878273   50077 fix.go:206] guest clock: 1698178330.824734287
	I1024 20:12:10.878283   50077 fix.go:219] Guest: 2023-10-24 20:12:10.824734287 +0000 UTC Remote: 2023-10-24 20:12:10.737513672 +0000 UTC m=+157.935911605 (delta=87.220615ms)
	I1024 20:12:10.878307   50077 fix.go:190] guest clock delta is within tolerance: 87.220615ms
	I1024 20:12:10.878314   50077 start.go:83] releasing machines lock for "old-k8s-version-467375", held for 24.935800385s
	I1024 20:12:10.878347   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.878614   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:10.881335   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.881746   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.881784   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.881933   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882442   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882654   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882741   50077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:12:10.882801   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.882860   50077 ssh_runner.go:195] Run: cat /version.json
	I1024 20:12:10.882886   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.885640   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.885856   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886047   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.886070   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886209   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.886276   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.886315   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886383   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.886439   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.886535   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.886579   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.886683   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.886699   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.886816   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:11.006700   50077 ssh_runner.go:195] Run: systemctl --version
	I1024 20:12:11.012734   50077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:12:11.162399   50077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:12:11.169673   50077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:12:11.169751   50077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:12:11.184770   50077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:12:11.184794   50077 start.go:472] detecting cgroup driver to use...
	I1024 20:12:11.184858   50077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:12:11.202317   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:12:11.218122   50077 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:12:11.218187   50077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:12:11.233177   50077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:12:11.247591   50077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:12:11.387195   50077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:12:11.520544   50077 docker.go:214] disabling docker service ...
	I1024 20:12:11.520615   50077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:12:11.539166   50077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:12:11.552957   50077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:12:11.710494   50077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:12:11.837532   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:12:11.854418   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:12:11.874953   50077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1024 20:12:11.875040   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.887115   50077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:12:11.887206   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.898994   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.908652   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.918280   50077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:12:11.930870   50077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:12:11.939522   50077 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:12:11.939580   50077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:12:11.955005   50077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:12:11.965173   50077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:12:12.098480   50077 ssh_runner.go:195] Run: sudo systemctl restart crio
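The sed edits and restart above amount to a small helper along these lines; a sketch only, with runSSH again a hypothetical command runner rather than minikube's own:

package runtime

import "fmt"

// configureCRIO applies the same edits the logged sed commands make to
// /etc/crio/crio.conf.d/02-crio.conf, then restarts the runtime.
func configureCRIO(runSSH func(string) (string, error), pauseImage, cgroupMgr string) error {
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupMgr),
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		if _, err := runSSH(c); err != nil {
			return fmt.Errorf("%q failed: %w", c, err)
		}
	}
	return nil
}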
	I1024 20:12:12.296897   50077 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:12:12.296993   50077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:12:12.302906   50077 start.go:540] Will wait 60s for crictl version
	I1024 20:12:12.302956   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:12.307142   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:12:12.353253   50077 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:12:12.353369   50077 ssh_runner.go:195] Run: crio --version
	I1024 20:12:12.417241   50077 ssh_runner.go:195] Run: crio --version
	I1024 20:12:12.486375   50077 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1024 20:12:12.487819   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:12.491366   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:12.491830   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:12.491862   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:12.492054   50077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 20:12:12.497705   50077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:12.514116   50077 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 20:12:12.514208   50077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:12.569171   50077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 20:12:12.569247   50077 ssh_runner.go:195] Run: which lz4
	I1024 20:12:12.574729   50077 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 20:12:12.579319   50077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:12:12.579364   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
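The stat-then-scp pattern above (the ~440 MB preload tarball is transferred only when the guest lacks it) looks roughly like this, with runSSH and scp as hypothetical helpers:

package preload

// ensurePreload mirrors the existence check and copy logged above: push the
// preloaded image tarball to the guest only when it is not already there.
func ensurePreload(runSSH func(string) (string, error), scp func(local, remote string) error, local string) error {
	if _, err := runSSH("stat /preloaded.tar.lz4"); err == nil {
		return nil // already present on the guest, skip the transfer
	}
	return scp(local, "/preloaded.tar.lz4")
}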
	I1024 20:12:10.905856   49071 main.go:141] libmachine: (no-preload-014826) Calling .Start
	I1024 20:12:10.906027   49071 main.go:141] libmachine: (no-preload-014826) Ensuring networks are active...
	I1024 20:12:10.906761   49071 main.go:141] libmachine: (no-preload-014826) Ensuring network default is active
	I1024 20:12:10.907112   49071 main.go:141] libmachine: (no-preload-014826) Ensuring network mk-no-preload-014826 is active
	I1024 20:12:10.907486   49071 main.go:141] libmachine: (no-preload-014826) Getting domain xml...
	I1024 20:12:10.908225   49071 main.go:141] libmachine: (no-preload-014826) Creating domain...
	I1024 20:12:12.324832   49071 main.go:141] libmachine: (no-preload-014826) Waiting to get IP...
	I1024 20:12:12.326055   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.326595   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.326695   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.326594   50821 retry.go:31] will retry after 197.462386ms: waiting for machine to come up
	I1024 20:12:12.526293   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.526743   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.526774   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.526720   50821 retry.go:31] will retry after 271.486585ms: waiting for machine to come up
	I1024 20:12:12.800360   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.801756   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.801940   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.801863   50821 retry.go:31] will retry after 486.882671ms: waiting for machine to come up
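The "will retry after ..." lines come from a backoff loop waiting for the DHCP lease to appear; a minimal sketch, with getIP standing in for the real libvirt lookup:

package machine

import (
	"fmt"
	"time"
)

// waitForIP polls the driver for the guest's address until it shows up or the
// timeout expires. The real retry uses jittered delays; doubling is a rough stand-in.
func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}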
	I1024 20:12:12.479397   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:12.479431   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:12.479445   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:12.490441   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:12.490470   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:12.990764   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:13.006526   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:13.006556   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:13.490974   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:13.499731   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:13.499764   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:09.195216   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:11.694410   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:13.698362   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:13.991467   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:14.011775   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 200:
	ok
	I1024 20:12:14.048756   49708 api_server.go:141] control plane version: v1.28.3
	I1024 20:12:14.048791   49708 api_server.go:131] duration metric: took 5.666161032s to wait for apiserver health ...
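The healthz polling that finally succeeds above can be sketched as a plain HTTP loop; TLS verification is skipped here only to keep the example short, whereas the real check authenticates against the cluster CA:

package health

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForAPIServer polls https://<host>:<port>/healthz until it returns 200.
// A 500 while post-start hooks (rbac/bootstrap-roles, ...) finish is expected.
func waitForAPIServer(base string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/healthz")
		if err == nil {
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "returned 200: ok" case above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}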
	I1024 20:12:14.048802   49708 cni.go:84] Creating CNI manager for ""
	I1024 20:12:14.048812   49708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:14.050652   49708 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:12:14.052331   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:12:14.086953   49708 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
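The 457-byte conflist written above is a bridge CNI configuration; an illustrative guess at its shape (not the exact file minikube generates), kept here as a Go constant:

package cni

// bridgeConflist approximates the /etc/cni/net.d/1-k8s.conflist contents:
// a bridge plugin with host-local IPAM plus portmap for hostPort support.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`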
	I1024 20:12:14.142753   49708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:12:14.162085   49708 system_pods.go:59] 8 kube-system pods found
	I1024 20:12:14.162211   49708 system_pods.go:61] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:12:14.162246   49708 system_pods.go:61] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:12:14.162280   49708 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:12:14.162307   49708 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:12:14.162330   49708 system_pods.go:61] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:12:14.162352   49708 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:12:14.162375   49708 system_pods.go:61] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:12:14.162411   49708 system_pods.go:61] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:12:14.162434   49708 system_pods.go:74] duration metric: took 19.657104ms to wait for pod list to return data ...
	I1024 20:12:14.162456   49708 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:12:14.173042   49708 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:12:14.173078   49708 node_conditions.go:123] node cpu capacity is 2
	I1024 20:12:14.173093   49708 node_conditions.go:105] duration metric: took 10.618815ms to run NodePressure ...
	I1024 20:12:14.173117   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:14.763495   49708 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:12:14.768626   49708 kubeadm.go:787] kubelet initialised
	I1024 20:12:14.768653   49708 kubeadm.go:788] duration metric: took 5.128553ms waiting for restarted kubelet to initialise ...
	I1024 20:12:14.768663   49708 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:14.788128   49708 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.800546   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.800582   49708 pod_ready.go:81] duration metric: took 12.417978ms waiting for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.800597   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.800610   49708 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.808416   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.808448   49708 pod_ready.go:81] duration metric: took 7.821099ms waiting for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.808463   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.808472   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.814286   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.814317   49708 pod_ready.go:81] duration metric: took 5.833548ms waiting for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.814331   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.814341   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.825548   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.825582   49708 pod_ready.go:81] duration metric: took 11.230382ms waiting for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.825596   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.825606   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.168279   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-proxy-x4zbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.168323   49708 pod_ready.go:81] duration metric: took 342.707312ms waiting for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.168338   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-proxy-x4zbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.168351   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.567697   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.567735   49708 pod_ready.go:81] duration metric: took 399.371702ms waiting for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.567750   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.567838   49708 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.967716   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.967750   49708 pod_ready.go:81] duration metric: took 399.892272ms waiting for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.967764   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.967773   49708 pod_ready.go:38] duration metric: took 1.199098599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:15.967793   49708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:12:15.986399   49708 ops.go:34] apiserver oom_adj: -16
	I1024 20:12:15.986422   49708 kubeadm.go:640] restartCluster took 21.848673162s
	I1024 20:12:15.986430   49708 kubeadm.go:406] StartCluster complete in 21.899940105s
	I1024 20:12:15.986444   49708 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:15.986545   49708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:12:15.989108   49708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:15.989647   49708 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:12:15.989617   49708 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:12:15.989715   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:12:15.989719   49708 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989736   49708 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-643126"
	W1024 20:12:15.989752   49708 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:12:15.989752   49708 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989775   49708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-643126"
	I1024 20:12:15.989786   49708 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989802   49708 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-643126"
	I1024 20:12:15.989804   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	W1024 20:12:15.989809   49708 addons.go:240] addon metrics-server should already be in state true
	I1024 20:12:15.989849   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	I1024 20:12:15.990183   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990192   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990246   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.990294   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.990209   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990327   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.995810   49708 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-643126" context rescaled to 1 replicas
	I1024 20:12:15.995838   49708 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.148 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:12:15.998001   49708 out.go:177] * Verifying Kubernetes components...
	I1024 20:12:16.001589   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:12:16.010690   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I1024 20:12:16.011310   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.011861   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.011890   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.012279   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.012906   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.012960   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.013706   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I1024 20:12:16.014057   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.014533   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.014560   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.014905   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.015330   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I1024 20:12:16.015444   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.015486   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.015703   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.016168   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.016188   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.016591   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.016763   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.020428   49708 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-643126"
	W1024 20:12:16.020448   49708 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:12:16.020474   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	I1024 20:12:16.020840   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.020873   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.031538   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I1024 20:12:16.033822   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.034350   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.034367   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.034746   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.034802   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34969
	I1024 20:12:16.034978   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.035073   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.035525   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.035549   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.035943   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.036217   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.036694   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.038891   49708 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:12:16.037871   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.040815   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:12:16.040832   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:12:16.040851   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.042238   49708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:14.393634   50077 crio.go:444] Took 1.818945 seconds to copy over tarball
	I1024 20:12:14.393720   50077 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:12:17.795931   50077 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.402175992s)
	I1024 20:12:17.795962   50077 crio.go:451] Took 3.402303 seconds to extract the tarball
	I1024 20:12:17.795974   50077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:12:17.841100   50077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:16.043742   49708 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:12:16.043758   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:12:16.043775   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.046924   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.047003   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.047035   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.047068   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.047224   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.049392   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.049433   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.049469   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.049487   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I1024 20:12:16.049492   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.049976   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.050488   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.050502   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.050534   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.050712   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.050810   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.050844   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.050974   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.051292   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.051327   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.051585   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.067412   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I1024 20:12:16.067810   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.068428   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.068445   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.068991   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.069222   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.070923   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.071196   49708 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:12:16.071219   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:12:16.071238   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.074735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.075400   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.075431   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.075630   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.075796   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.075935   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.076097   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.201177   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:12:16.201198   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:12:16.224757   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:12:16.247200   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:12:16.247225   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:12:16.259476   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:12:16.324327   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:12:16.324354   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:12:16.371331   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:12:16.384042   49708 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-643126" to be "Ready" ...
	I1024 20:12:16.384367   49708 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:12:17.654459   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.429657283s)
	I1024 20:12:17.654516   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.654529   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.654951   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:17.654978   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.654990   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:17.655004   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.655016   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.655330   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.655353   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:17.672310   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.672337   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.672693   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:17.672738   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.672761   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.138719   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.879209719s)
	I1024 20:12:18.138769   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.138783   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.139079   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.139091   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.139103   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.139117   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.139132   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.139322   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.139338   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.139338   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.203722   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.832303736s)
	I1024 20:12:18.203776   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.203793   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.204088   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.204106   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.204118   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.204128   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.204348   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.204378   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.204393   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.204406   49708 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-643126"
	I1024 20:12:13.290974   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:13.291494   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:13.291524   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:13.291402   50821 retry.go:31] will retry after 588.738796ms: waiting for machine to come up
	I1024 20:12:13.882058   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:13.882661   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:13.882685   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:13.882577   50821 retry.go:31] will retry after 626.457323ms: waiting for machine to come up
	I1024 20:12:14.510560   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:14.511120   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:14.511159   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:14.511059   50821 retry.go:31] will retry after 848.521213ms: waiting for machine to come up
	I1024 20:12:15.360917   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:15.361423   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:15.361452   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:15.361397   50821 retry.go:31] will retry after 790.780783ms: waiting for machine to come up
	I1024 20:12:16.153815   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:16.154332   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:16.154364   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:16.154274   50821 retry.go:31] will retry after 1.066691012s: waiting for machine to come up
	I1024 20:12:17.222675   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:17.223280   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:17.223309   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:17.223248   50821 retry.go:31] will retry after 1.657285361s: waiting for machine to come up
	I1024 20:12:18.299768   49708 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1024 20:12:16.196266   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:18.197531   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:18.397703   49708 node_ready.go:58] node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:17.907894   50077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 20:12:18.029064   50077 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 20:12:18.029174   50077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.029196   50077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.029209   50077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.029219   50077 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.029403   50077 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1024 20:12:18.029418   50077 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.029178   50077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.029178   50077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.030719   50077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.030726   50077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.030730   50077 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1024 20:12:18.030748   50077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.030775   50077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.030801   50077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.030972   50077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.031077   50077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.180435   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.182586   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.185966   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1024 20:12:18.190926   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.196636   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.198176   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.205102   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.285789   50077 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1024 20:12:18.285837   50077 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.285889   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.356595   50077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1024 20:12:18.356639   50077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.356678   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.370773   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.387248   50077 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1024 20:12:18.387295   50077 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.387343   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.387461   50077 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1024 20:12:18.387488   50077 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1024 20:12:18.387530   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400566   50077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1024 20:12:18.400608   50077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.400647   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400660   50077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1024 20:12:18.400705   50077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.400742   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400754   50077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1024 20:12:18.400785   50077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.400812   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400845   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.400814   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.545451   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.545541   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1024 20:12:18.545587   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.545674   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.545724   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.545777   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1024 20:12:18.545734   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1024 20:12:18.683462   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1024 20:12:18.683513   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1024 20:12:18.683578   50077 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1024 20:12:18.683656   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1024 20:12:18.683686   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1024 20:12:18.683732   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1024 20:12:18.688916   50077 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1024 20:12:18.688954   50077 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1024 20:12:18.689040   50077 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1024 20:12:20.355824   50077 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.666754363s)
	I1024 20:12:20.355859   50077 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1024 20:12:20.355920   50077 cache_images.go:92] LoadImages completed in 2.326833316s
	W1024 20:12:20.356004   50077 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I1024 20:12:20.356080   50077 ssh_runner.go:195] Run: crio config
	I1024 20:12:20.428753   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:12:20.428775   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:20.428793   50077 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:12:20.428835   50077 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467375 NodeName:old-k8s-version-467375 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1024 20:12:20.429015   50077 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467375"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-467375
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.71:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:12:20.429115   50077 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467375 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:12:20.429179   50077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1024 20:12:20.440158   50077 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:12:20.440239   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:12:20.450883   50077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1024 20:12:20.470913   50077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:12:20.490653   50077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1024 20:12:20.510287   50077 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I1024 20:12:20.514815   50077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:20.526910   50077 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375 for IP: 192.168.39.71
	I1024 20:12:20.526943   50077 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:20.527172   50077 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:12:20.527227   50077 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:12:20.527313   50077 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.key
	I1024 20:12:20.527401   50077 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.key.f4667c0f
	I1024 20:12:20.527458   50077 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.key
	I1024 20:12:20.527617   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:12:20.527658   50077 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:12:20.527672   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:12:20.527712   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:12:20.527768   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:12:20.527803   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:12:20.527867   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:20.528563   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:12:20.561437   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:12:20.593396   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:12:20.626812   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 20:12:20.659073   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:12:20.690934   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:12:20.723550   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:12:20.754091   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:12:20.785078   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:12:20.813190   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:12:20.845338   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:12:20.876594   50077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:12:20.899560   50077 ssh_runner.go:195] Run: openssl version
	I1024 20:12:20.907482   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:12:20.922776   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.929623   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.929693   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.935454   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:12:20.947494   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:12:20.958906   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.964115   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.964177   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.970084   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:12:20.982477   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:12:20.995317   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.000479   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.000568   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.006797   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:12:21.020161   50077 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:12:21.025037   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:12:21.033376   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:12:21.041858   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:12:21.050119   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:12:21.058140   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:12:21.066151   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1024 20:12:21.074299   50077 kubeadm.go:404] StartCluster: {Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:12:21.074409   50077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:12:21.074454   50077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:21.125486   50077 cri.go:89] found id: ""
	I1024 20:12:21.125559   50077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:12:21.139034   50077 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:12:21.139058   50077 kubeadm.go:636] restartCluster start
	I1024 20:12:21.139113   50077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:12:21.151994   50077 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.153569   50077 kubeconfig.go:92] found "old-k8s-version-467375" server: "https://192.168.39.71:8443"
	I1024 20:12:21.157114   50077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:12:21.169908   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.169998   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.186116   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.186138   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.186187   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.201283   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.702002   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.702084   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.717499   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:22.201839   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:22.201946   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:22.217814   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:22.702454   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:22.702525   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:22.720944   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
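	Each "Checking apiserver status" entry above runs `pgrep -xnf kube-apiserver.*minikube.*` on roughly a half-second cadence until a PID appears or the surrounding deadline expires; the "Process exited with status 1" lines just mean no match yet. A rough sketch of that poll loop (interval and pattern mirror the log; the function name is illustrative):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForProcess polls pgrep until the pattern matches a running process
// or the context deadline is hit.
func waitForProcess(ctx context.Context, pattern string, interval time.Duration) (string, error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // PID found
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("process %q never appeared: %w", pattern, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	pid, err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond)
	fmt.Println(pid, err)
}
```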
	I1024 20:12:18.882382   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:18.882833   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:18.882869   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:18.882798   50821 retry.go:31] will retry after 1.854607935s: waiting for machine to come up
	I1024 20:12:20.738594   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:20.739327   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:20.739375   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:20.739255   50821 retry.go:31] will retry after 2.774006375s: waiting for machine to come up
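	The libmachine retries above back off with growing, jittered delays (1.85s, then 2.77s, and so on) while waiting for the no-preload VM to obtain an IP address. A simplified backoff-with-jitter loop under assumed names, not minikube's retry package itself:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds, sleeping an exponentially
// growing, jittered interval between attempts.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return err
}

func main() {
	i := 0
	err := retryWithBackoff(func() error {
		i++
		if i < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 5, time.Second)
	fmt.Println("done:", err)
}
```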
	I1024 20:12:18.891092   49708 addons.go:502] enable addons completed in 2.901476764s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1024 20:12:20.898330   49708 node_ready.go:58] node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:22.897985   49708 node_ready.go:49] node "default-k8s-diff-port-643126" has status "Ready":"True"
	I1024 20:12:22.898016   49708 node_ready.go:38] duration metric: took 6.51394456s waiting for node "default-k8s-diff-port-643126" to be "Ready" ...
	I1024 20:12:22.898029   49708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:22.907326   49708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:22.915330   49708 pod_ready.go:92] pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:22.915354   49708 pod_ready.go:81] duration metric: took 7.999933ms waiting for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:22.915366   49708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:20.698011   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:23.195726   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:23.201529   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:23.201620   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:23.215098   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:23.701482   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:23.701572   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:23.715481   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:24.201550   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:24.201610   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:24.218008   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:24.701489   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:24.701591   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:24.716960   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:25.201492   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:25.201558   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:25.215972   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:25.701398   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:25.701506   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:25.714016   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:26.201948   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:26.202018   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:26.215403   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:26.701876   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:26.701948   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:26.714598   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:27.202095   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:27.202161   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:27.215728   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:27.702476   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:27.702589   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:27.715925   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:23.514310   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:23.514813   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:23.514850   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:23.514763   50821 retry.go:31] will retry after 3.277478612s: waiting for machine to come up
	I1024 20:12:26.793845   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:26.794291   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:26.794312   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:26.794249   50821 retry.go:31] will retry after 4.518205069s: waiting for machine to come up
	I1024 20:12:24.934951   49708 pod_ready.go:92] pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:24.934977   49708 pod_ready.go:81] duration metric: took 2.019602232s waiting for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.934990   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.940403   49708 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:24.940424   49708 pod_ready.go:81] duration metric: took 5.425415ms waiting for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.940437   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.805106   49708 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:25.805127   49708 pod_ready.go:81] duration metric: took 864.682784ms waiting for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.805137   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.096987   49708 pod_ready.go:92] pod "kube-proxy-x4zbh" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:26.097025   49708 pod_ready.go:81] duration metric: took 291.86715ms waiting for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.097040   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.497404   49708 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:26.497425   49708 pod_ready.go:81] duration metric: took 400.376909ms waiting for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.497444   49708 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
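	The pod_ready entries above watch each control-plane pod until its Ready condition reports "True", moving on to the next pod as soon as it does. Outside of minikube's own helpers, the same check can be delegated to `kubectl wait`; a small hedged wrapper (context, namespace, and pod name are taken from the log, the wrapper itself is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// waitPodReady blocks until the named pod reports the Ready condition,
// delegating the watch to `kubectl wait`.
func waitPodReady(kubeContext, namespace, pod, timeout string) error {
	cmd := exec.Command("kubectl",
		"--context", kubeContext,
		"-n", namespace,
		"wait", "--for=condition=Ready",
		"pod/"+pod,
		"--timeout="+timeout)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := waitPodReady("default-k8s-diff-port-643126", "kube-system",
		"etcd-default-k8s-diff-port-643126", "6m"); err != nil {
		fmt.Fprintln(os.Stderr, "pod never became Ready:", err)
		os.Exit(1)
	}
}
```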
	I1024 20:12:25.694439   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:28.192955   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:28.201919   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:28.201990   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:28.215407   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:28.701578   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:28.701658   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:28.714135   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:29.202433   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:29.202553   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:29.214936   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:29.702439   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:29.702499   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:29.714852   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:30.202428   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:30.202500   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:30.214283   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:30.702441   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:30.702500   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:30.715562   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:31.170652   50077 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:12:31.170682   50077 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:12:31.170693   50077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:12:31.170772   50077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:31.231971   50077 cri.go:89] found id: ""
	I1024 20:12:31.232068   50077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:12:31.249451   50077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:12:31.261057   50077 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:12:31.261124   50077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:31.270878   50077 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:31.270901   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:31.407803   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.357283   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.567466   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.659297   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.745553   50077 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:12:32.745629   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:32.761052   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
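	Because the stale-config check found none of the expected kubeconfig files, the control plane is rebuilt piecewise with `kubeadm init phase` commands rather than a full `kubeadm init`. The five phases invoked above (certs, kubeconfig, kubelet-start, control-plane, etcd) can be scripted in order; this sketch mirrors the logged commands, with the binary and config paths taken from the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"env",
			"PATH=/var/lib/minikube/binaries/v1.16.0:" + os.Getenv("PATH"),
			"kubeadm", "init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		fmt.Println("running phase:", phase)
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
```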
	I1024 20:12:31.314269   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.314887   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has current primary IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.314912   49071 main.go:141] libmachine: (no-preload-014826) Found IP for machine: 192.168.50.162
	I1024 20:12:31.314926   49071 main.go:141] libmachine: (no-preload-014826) Reserving static IP address...
	I1024 20:12:31.315396   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "no-preload-014826", mac: "52:54:00:33:64:68", ip: "192.168.50.162"} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.315434   49071 main.go:141] libmachine: (no-preload-014826) DBG | skip adding static IP to network mk-no-preload-014826 - found existing host DHCP lease matching {name: "no-preload-014826", mac: "52:54:00:33:64:68", ip: "192.168.50.162"}
	I1024 20:12:31.315448   49071 main.go:141] libmachine: (no-preload-014826) Reserved static IP address: 192.168.50.162
	I1024 20:12:31.315465   49071 main.go:141] libmachine: (no-preload-014826) Waiting for SSH to be available...
	I1024 20:12:31.315483   49071 main.go:141] libmachine: (no-preload-014826) DBG | Getting to WaitForSSH function...
	I1024 20:12:31.318209   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.318611   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.318653   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.318819   49071 main.go:141] libmachine: (no-preload-014826) DBG | Using SSH client type: external
	I1024 20:12:31.318871   49071 main.go:141] libmachine: (no-preload-014826) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa (-rw-------)
	I1024 20:12:31.318916   49071 main.go:141] libmachine: (no-preload-014826) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:12:31.318941   49071 main.go:141] libmachine: (no-preload-014826) DBG | About to run SSH command:
	I1024 20:12:31.318957   49071 main.go:141] libmachine: (no-preload-014826) DBG | exit 0
	I1024 20:12:31.414054   49071 main.go:141] libmachine: (no-preload-014826) DBG | SSH cmd err, output: <nil>: 
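	WaitForSSH above shells out to the system ssh binary with `exit 0` until the guest answers. A lighter-weight probe simply retries a TCP dial against port 22; a minimal sketch, with the host and timeouts chosen for illustration:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH returns once a TCP connection to host:22 succeeds,
// or gives up after the deadline.
func waitForSSH(host string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", host, deadline)
}

func main() {
	fmt.Println(waitForSSH("192.168.50.162", 2*time.Minute))
}
```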
	I1024 20:12:31.414566   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetConfigRaw
	I1024 20:12:31.415326   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:31.418120   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.418549   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.418582   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.418808   49071 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/config.json ...
	I1024 20:12:31.419009   49071 machine.go:88] provisioning docker machine ...
	I1024 20:12:31.419033   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:31.419222   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.419399   49071 buildroot.go:166] provisioning hostname "no-preload-014826"
	I1024 20:12:31.419423   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.419578   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.421861   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.422241   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.422273   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.422501   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.422676   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.422847   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.423066   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.423250   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.423707   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.423724   49071 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-014826 && echo "no-preload-014826" | sudo tee /etc/hostname
	I1024 20:12:31.557472   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-014826
	
	I1024 20:12:31.557504   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.560529   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.560928   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.560979   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.561201   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.561457   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.561654   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.561817   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.561968   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.562329   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.562357   49071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-014826' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-014826/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-014826' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:12:31.694896   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:12:31.694927   49071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:12:31.694948   49071 buildroot.go:174] setting up certificates
	I1024 20:12:31.694959   49071 provision.go:83] configureAuth start
	I1024 20:12:31.694967   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.695264   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:31.697858   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.698148   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.698176   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.698357   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.700982   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.701332   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.701364   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.701570   49071 provision.go:138] copyHostCerts
	I1024 20:12:31.701625   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:12:31.701642   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:12:31.701733   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:12:31.701845   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:12:31.701857   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:12:31.701883   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:12:31.701947   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:12:31.701956   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:12:31.701978   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:12:31.702043   49071 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.no-preload-014826 san=[192.168.50.162 192.168.50.162 localhost 127.0.0.1 minikube no-preload-014826]
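	The server-cert step above mints a machine certificate whose SANs cover the VM IP, localhost, and the hostnames listed in the log line. A stripped-down sketch of the SAN handling with crypto/x509; note this version self-signs for brevity, whereas the log shows the cert being issued from the host CA key:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-014826"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-014826"},
		IPAddresses: []net.IP{net.ParseIP("192.168.50.162"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed for brevity: the template doubles as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```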
	I1024 20:12:31.798568   49071 provision.go:172] copyRemoteCerts
	I1024 20:12:31.798622   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:12:31.798642   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.801859   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.802237   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.802269   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.802465   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.802672   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.802867   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.803027   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:31.891633   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:12:31.916451   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1024 20:12:31.937924   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:12:31.961360   49071 provision.go:86] duration metric: configureAuth took 266.390893ms
	I1024 20:12:31.961384   49071 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:12:31.961573   49071 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:12:31.961660   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.964354   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.964662   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.964719   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.964798   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.965002   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.965170   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.965329   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.965516   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.965961   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.965983   49071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:12:32.275884   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:12:32.275911   49071 machine.go:91] provisioned docker machine in 856.887593ms
	I1024 20:12:32.275923   49071 start.go:300] post-start starting for "no-preload-014826" (driver="kvm2")
	I1024 20:12:32.275935   49071 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:12:32.275957   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.276268   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:12:32.276298   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.279248   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.279642   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.279678   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.279798   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.279985   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.280182   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.280455   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.371931   49071 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:12:32.375989   49071 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:12:32.376009   49071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:12:32.376077   49071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:12:32.376173   49071 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:12:32.376295   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:12:32.385018   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:32.408697   49071 start.go:303] post-start completed in 132.759815ms
	I1024 20:12:32.408719   49071 fix.go:56] fixHost completed within 21.530244363s
	I1024 20:12:32.408744   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.411800   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.412155   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.412189   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.412363   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.412574   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.412741   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.412916   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.413083   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:32.413469   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:32.413483   49071 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:12:32.534092   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178352.477877903
	
	I1024 20:12:32.534116   49071 fix.go:206] guest clock: 1698178352.477877903
	I1024 20:12:32.534127   49071 fix.go:219] Guest: 2023-10-24 20:12:32.477877903 +0000 UTC Remote: 2023-10-24 20:12:32.408724059 +0000 UTC m=+364.183674654 (delta=69.153844ms)
	I1024 20:12:32.534153   49071 fix.go:190] guest clock delta is within tolerance: 69.153844ms
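	The clock check above compares the guest's `date +%s.%N` output against the host clock and accepts the restore only if the delta is within tolerance (here 69ms). A small sketch of parsing that output and computing the delta, using the two timestamps from the log and an assumed tolerance:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1698178352.477877903") // guest value from the log
	if err != nil {
		panic(err)
	}
	// Host reference captured at the same moment (value from the log).
	local := time.Date(2023, 10, 24, 20, 12, 32, 408724059, time.UTC)
	delta := guest.Sub(local)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold, for illustration only
	fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta < tolerance)
}
```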
	I1024 20:12:32.534159   49071 start.go:83] releasing machines lock for "no-preload-014826", held for 21.655714466s
	I1024 20:12:32.534185   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.534468   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:32.537523   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.537932   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.537961   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.538160   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.538690   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.538919   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.539004   49071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:12:32.539089   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.539138   49071 ssh_runner.go:195] Run: cat /version.json
	I1024 20:12:32.539166   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.542176   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542308   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542652   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.542689   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.542714   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542732   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542981   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.542985   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.543207   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.543214   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.543387   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.543429   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.543573   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.543579   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.631242   49071 ssh_runner.go:195] Run: systemctl --version
	I1024 20:12:32.657695   49071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:12:32.808471   49071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:12:32.815640   49071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:12:32.815712   49071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:12:32.830198   49071 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:12:32.830219   49071 start.go:472] detecting cgroup driver to use...
	I1024 20:12:32.830295   49071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:12:32.845231   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:12:32.863283   49071 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:12:32.863328   49071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:12:32.878295   49071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:12:32.894182   49071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:12:33.024491   49071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:12:33.156548   49071 docker.go:214] disabling docker service ...
	I1024 20:12:33.156621   49071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:12:33.169940   49071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:12:33.182368   49071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:12:28.804366   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:30.806145   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:32.806217   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:30.193022   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:32.195173   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:33.297156   49071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:12:33.434526   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:12:33.453482   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:12:33.471594   49071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:12:33.471665   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.481491   49071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:12:33.481563   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.490505   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.500003   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.509825   49071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:12:33.524014   49071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:12:33.532876   49071 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:12:33.532936   49071 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:12:33.545922   49071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:12:33.554519   49071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:12:33.661858   49071 ssh_runner.go:195] Run: sudo systemctl restart crio
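	The sed one-liners above point CRI-O at the registry.k8s.io/pause:3.9 image and the cgroupfs cgroup manager before the service is restarted. A pure-Go rewrite of the same two keys in the drop-in file, as a sketch (path and key names from the log; a `systemctl restart crio` is still required afterwards, and root privileges are assumed):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption replaces (or appends) a `key = "value"` line in a CRI-O drop-in.
func setCrioOption(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.Match(conf) {
		return re.ReplaceAll(conf, []byte(line))
	}
	return append(conf, []byte("\n"+line+"\n")...)
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	// Restart CRI-O (e.g. `sudo systemctl restart crio`) for the change to apply.
}
```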
	I1024 20:12:33.867286   49071 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:12:33.867361   49071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:12:33.873180   49071 start.go:540] Will wait 60s for crictl version
	I1024 20:12:33.873259   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:33.877238   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:12:33.918479   49071 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:12:33.918624   49071 ssh_runner.go:195] Run: crio --version
	I1024 20:12:33.970986   49071 ssh_runner.go:195] Run: crio --version
	I1024 20:12:34.026667   49071 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 20:12:33.278190   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:33.777448   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:34.277381   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:34.320204   50077 api_server.go:72] duration metric: took 1.574651034s to wait for apiserver process to appear ...
	I1024 20:12:34.320230   50077 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:12:34.320258   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.320744   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I1024 20:12:34.320773   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.321162   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I1024 20:12:34.821724   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
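	Once the apiserver process exists, readiness is judged by polling https://192.168.39.71:8443/healthz; the "connection refused" lines just mean the listener is not up yet. A minimal poller for that endpoint (the insecure TLS config reflects that this probe only cares about reachability, not server identity; URL and timeouts are taken from or inspired by the log):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver healthz endpoint until it returns 200 OK
// or the deadline passes.
func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never reported healthy", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.71:8443/healthz", 4*time.Minute))
}
```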
	I1024 20:12:34.028144   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:34.031311   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:34.031699   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:34.031733   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:34.031888   49071 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1024 20:12:34.036386   49071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:34.052307   49071 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:12:34.052360   49071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:34.099209   49071 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:12:34.099236   49071 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
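	Since this is the no-preload profile, the runtime has no cached tarball, so the preload check falls back to asking CRI-O which images it already holds via `crictl images --output json` and loading the rest. A sketch of that lookup, assuming the JSON shape `{"images":[{"repoTags":[...]}]}` that crictl emits:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList matches the (assumed) shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already holds the given tag, which is
// how the preload check above decides whether an image must be transferred.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.3")
	fmt.Println("preloaded:", ok, err)
}
```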
	I1024 20:12:34.099291   49071 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.099331   49071 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.099331   49071 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.099414   49071 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.099497   49071 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1024 20:12:34.099512   49071 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.099547   49071 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.099575   49071 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.101069   49071 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.101083   49071 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.101096   49071 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1024 20:12:34.101077   49071 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.101135   49071 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.101147   49071 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.101173   49071 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.101428   49071 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.283586   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.292930   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.294280   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.303296   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1024 20:12:34.314337   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.323356   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.327726   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.373724   49071 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1024 20:12:34.373774   49071 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.373819   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.466499   49071 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1024 20:12:34.466540   49071 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.466582   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.487167   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.489929   49071 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1024 20:12:34.489986   49071 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.490027   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588137   49071 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1024 20:12:34.588178   49071 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.588206   49071 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1024 20:12:34.588231   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588248   49071 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.588286   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588308   49071 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1024 20:12:34.588330   49071 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.588340   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.588358   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588388   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.588410   49071 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1024 20:12:34.588427   49071 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.588447   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588448   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.605099   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.693897   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.694097   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1024 20:12:34.694204   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.707142   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.707184   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.707265   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1024 20:12:34.707388   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:34.707384   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1024 20:12:34.707516   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:34.722106   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1024 20:12:34.722205   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:34.776997   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1024 20:12:34.777019   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.777067   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.777089   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1024 20:12:34.777180   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:34.804122   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1024 20:12:34.804241   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:34.814486   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1024 20:12:34.814532   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1024 20:12:34.814567   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1024 20:12:34.814607   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1024 20:12:34.814634   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:38.115460   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (3.338366217s)
	I1024 20:12:38.115492   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1024 20:12:38.115516   49071 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:38.115548   49071 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3: (3.338341429s)
	I1024 20:12:38.115570   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:38.115586   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1024 20:12:38.115618   49071 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3: (3.311351093s)
	I1024 20:12:38.115644   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1024 20:12:38.115650   49071 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.30100028s)
	I1024 20:12:38.115665   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1024 20:12:34.807460   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:37.307370   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:34.696540   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:37.192160   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:39.822511   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1024 20:12:39.822561   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:40.734083   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:12:40.734125   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:12:40.734161   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:40.777985   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1024 20:12:40.778037   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1024 20:12:40.822134   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.042292   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.042343   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:41.321887   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.363625   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.363682   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:41.821995   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.828080   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.828114   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:42.321381   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:42.331626   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1024 20:12:42.342584   50077 api_server.go:141] control plane version: v1.16.0
	I1024 20:12:42.342614   50077 api_server.go:131] duration metric: took 8.022377051s to wait for apiserver health ...
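Context for the probe sequence above: apiserver readiness is determined by repeatedly polling the /healthz endpoint, tolerating the transient 403 (RBAC bootstrap roles not yet created) and 500 (post-start hooks still failing) responses until the endpoint returns 200. A minimal sketch of such a poller as a standalone Go program; the URL, timeout, and sleep interval are illustrative placeholders, not minikube's actual api_server.go logic:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// TLS verification is skipped because the apiserver uses a cluster-local CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // 403/500 while bootstrap hooks finish
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.71:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}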
	I1024 20:12:42.342626   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:12:42.342634   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:42.344676   50077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:12:42.346118   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:12:42.363399   50077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:12:42.389481   50077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:12:42.403326   50077 system_pods.go:59] 7 kube-system pods found
	I1024 20:12:42.403370   50077 system_pods.go:61] "coredns-5644d7b6d9-x567q" [1dc7f1c2-4997-4330-a9bc-b914b1c1db9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:12:42.403381   50077 system_pods.go:61] "etcd-old-k8s-version-467375" [62c8ab28-033f-43fa-96b2-e127d8d46730] Running
	I1024 20:12:42.403389   50077 system_pods.go:61] "kube-apiserver-old-k8s-version-467375" [87c58a79-9f12-4be3-a450-69aa22674541] Running
	I1024 20:12:42.403398   50077 system_pods.go:61] "kube-controller-manager-old-k8s-version-467375" [6bf66f9f-1431-4b3f-b186-528945c54a63] Running
	I1024 20:12:42.403412   50077 system_pods.go:61] "kube-proxy-jdvck" [d35f42b9-9be8-43ee-8434-3d557e31bfde] Running
	I1024 20:12:42.403418   50077 system_pods.go:61] "kube-scheduler-old-k8s-version-467375" [63ae0d31-ace3-4490-a2e8-ed110e3a1072] Running
	I1024 20:12:42.403424   50077 system_pods.go:61] "storage-provisioner" [9105f8d8-3aa1-422d-acf2-9f83e9ede8af] Running
	I1024 20:12:42.403431   50077 system_pods.go:74] duration metric: took 13.927429ms to wait for pod list to return data ...
	I1024 20:12:42.403440   50077 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:12:42.408844   50077 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:12:42.408890   50077 node_conditions.go:123] node cpu capacity is 2
	I1024 20:12:42.408905   50077 node_conditions.go:105] duration metric: took 5.459392ms to run NodePressure ...
	I1024 20:12:42.408926   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:42.701645   50077 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:12:42.707084   50077 retry.go:31] will retry after 366.455415ms: kubelet not initialised
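The retry.go lines above come from a generic retry-with-delay helper: the kubelet-initialised check is re-run after growing delays until it succeeds or the retry budget is exhausted. A simplified stand-in, assuming a doubling delay; this is an illustration, not minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn until it returns nil or attempts are exhausted,
// roughly doubling the delay between tries.
func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	_ = retry(5, 400*time.Millisecond, func() error {
		return errors.New("kubelet not initialised") // placeholder check
	})
}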
	I1024 20:12:39.807495   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:42.306172   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:39.193434   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:41.195135   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:43.694847   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:43.078083   50077 retry.go:31] will retry after 411.231242ms: kubelet not initialised
	I1024 20:12:43.494711   50077 retry.go:31] will retry after 768.972767ms: kubelet not initialised
	I1024 20:12:44.268690   50077 retry.go:31] will retry after 693.655783ms: kubelet not initialised
	I1024 20:12:45.186580   50077 retry.go:31] will retry after 1.610937297s: kubelet not initialised
	I1024 20:12:46.803897   50077 retry.go:31] will retry after 959.133509ms: kubelet not initialised
	I1024 20:12:47.768260   50077 retry.go:31] will retry after 1.51466069s: kubelet not initialised
	I1024 20:12:45.464752   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.34915976s)
	I1024 20:12:45.464779   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1024 20:12:45.464821   49071 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:45.464899   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:46.936699   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.471766425s)
	I1024 20:12:46.936725   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1024 20:12:46.936750   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:46.936790   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:44.806094   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:46.807137   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:45.696196   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:48.192732   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:49.288179   50077 retry.go:31] will retry after 5.048749504s: kubelet not initialised
	I1024 20:12:49.615688   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.678859869s)
	I1024 20:12:49.615726   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1024 20:12:49.615763   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:49.615840   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:51.387159   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.771279542s)
	I1024 20:12:51.387185   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1024 20:12:51.387209   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:51.387258   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:52.868127   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.480840395s)
	I1024 20:12:52.868158   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1024 20:12:52.868184   49071 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:52.868233   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:49.304156   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:51.305456   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:53.307726   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:50.195756   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:52.196133   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:54.342759   50077 retry.go:31] will retry after 8.402807892s: kubelet not initialised
	I1024 20:12:53.617841   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1024 20:12:53.617883   49071 cache_images.go:123] Successfully loaded all cached images
	I1024 20:12:53.617889   49071 cache_images.go:92] LoadImages completed in 19.518639759s
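Because this profile runs without a preload tarball, each required image above is verified individually against the container runtime: the stored image ID is read back via podman and compared with the expected ID, and any miss or mismatch marks the image as needing transfer from the local cache before it is loaded with `podman load`. A hypothetical sketch of that presence check (the image name and expected ID are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageNeedsTransfer reports whether image is absent from podman/CRI-O storage
// or present under a different ID than expected.
func imageNeedsTransfer(image, expectedID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // inspect failed: image not in the runtime, transfer it
	}
	return strings.TrimSpace(string(out)) != expectedID
}

func main() {
	needs := imageNeedsTransfer("registry.k8s.io/pause:3.9", "expected-image-id")
	fmt.Println("needs transfer:", needs)
}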
	I1024 20:12:53.617972   49071 ssh_runner.go:195] Run: crio config
	I1024 20:12:53.677157   49071 cni.go:84] Creating CNI manager for ""
	I1024 20:12:53.677181   49071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:53.677198   49071 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:12:53.677215   49071 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.162 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-014826 NodeName:no-preload-014826 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:12:53.677386   49071 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-014826"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
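The kubeadm configuration printed above is generated from the cluster's parameters (Kubernetes version, node IP and name, pod and service CIDRs) rather than written by hand. A toy illustration of how such a manifest could be rendered with text/template; the template and field names are invented for the example and are not minikube's actual bootstrapper templates:

package main

import (
	"os"
	"text/template"
)

// params holds the handful of values that vary between clusters.
type params struct {
	KubernetesVersion, PodSubnet string
}

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: 10.96.0.0/12
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterCfg))
	// Render to stdout; in the log above the result is written to
	// /var/tmp/minikube/kubeadm.yaml.new and copied into place later.
	t.Execute(os.Stdout, params{
		KubernetesVersion: "v1.28.3",
		PodSubnet:         "10.244.0.0/16",
	})
}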
	
	I1024 20:12:53.677482   49071 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-014826 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-014826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:12:53.677552   49071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:12:53.688840   49071 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:12:53.688904   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:12:53.700095   49071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1024 20:12:53.717176   49071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:12:53.737316   49071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1024 20:12:53.756100   49071 ssh_runner.go:195] Run: grep 192.168.50.162	control-plane.minikube.internal$ /etc/hosts
	I1024 20:12:53.760013   49071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:53.771571   49071 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826 for IP: 192.168.50.162
	I1024 20:12:53.771601   49071 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:53.771752   49071 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:12:53.771811   49071 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:12:53.771896   49071 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.key
	I1024 20:12:53.771975   49071 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.key.1b8245f8
	I1024 20:12:53.772056   49071 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.key
	I1024 20:12:53.772205   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:12:53.772250   49071 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:12:53.772262   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:12:53.772303   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:12:53.772333   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:12:53.772354   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:12:53.772397   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:53.773081   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:12:53.797387   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:12:53.822084   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:12:53.846401   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:12:53.869361   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:12:53.891519   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:12:53.914051   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:12:53.935925   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:12:53.958389   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:12:53.982011   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:12:54.005921   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:12:54.029793   49071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:12:54.047319   49071 ssh_runner.go:195] Run: openssl version
	I1024 20:12:54.053493   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:12:54.064414   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.069060   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.069115   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.075137   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:12:54.088046   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:12:54.099949   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.104810   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.104867   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.110617   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:12:54.122160   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:12:54.133062   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.137858   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.137922   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.144146   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:12:54.155998   49071 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:12:54.160989   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:12:54.167441   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:12:54.173797   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:12:54.180320   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:12:54.186876   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:12:54.193624   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
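Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours, which feeds the decision to reuse or regenerate certificates before restarting the cluster. The equivalent check can be expressed with Go's crypto/x509; a small sketch, with the certificate path as a placeholder:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within the given duration (the -checkend equivalent).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}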
	I1024 20:12:54.200066   49071 kubeadm.go:404] StartCluster: {Name:no-preload-014826 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-014826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:12:54.200165   49071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:12:54.200202   49071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:54.253207   49071 cri.go:89] found id: ""
	I1024 20:12:54.253267   49071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:12:54.264316   49071 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:12:54.264348   49071 kubeadm.go:636] restartCluster start
	I1024 20:12:54.264404   49071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:12:54.276382   49071 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.277506   49071 kubeconfig.go:92] found "no-preload-014826" server: "https://192.168.50.162:8443"
	I1024 20:12:54.279888   49071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:12:54.290005   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.290052   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.302383   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.302400   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.302447   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.315130   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.815483   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.815574   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.827862   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.315372   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:55.315430   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:55.328409   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.816079   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:55.816141   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:55.829755   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:56.315782   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:56.315869   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:56.329006   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:56.815526   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:56.815621   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:56.828167   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:57.315692   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:57.315781   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:57.328590   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:57.816175   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:57.816250   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:57.832014   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.805830   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:57.810013   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:54.692702   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:57.192210   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:02.750533   50077 retry.go:31] will retry after 7.667287878s: kubelet not initialised
	I1024 20:12:58.315841   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:58.315922   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:58.329743   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:58.815711   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:58.815779   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:58.828215   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:59.315817   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:59.315924   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:59.328911   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:59.815493   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:59.815583   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:59.829684   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.316215   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:00.316294   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:00.330227   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.815830   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:00.815901   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:00.828290   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:01.315228   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:01.315319   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:01.329972   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:01.815426   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:01.815495   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:01.829199   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:02.315754   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:02.315834   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:02.328463   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:02.816091   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:02.816175   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:02.830548   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.304116   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:02.304336   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:59.193761   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:01.692343   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:03.693961   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:03.315186   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:03.315249   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:03.327729   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:03.815302   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:03.815389   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:03.827308   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:04.290952   49071 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:13:04.290993   49071 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:13:04.291005   49071 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:13:04.291078   49071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:13:04.333468   49071 cri.go:89] found id: ""
	I1024 20:13:04.333543   49071 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:13:04.351889   49071 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:13:04.362176   49071 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:13:04.362251   49071 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:13:04.372650   49071 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:13:04.372683   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:04.495803   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.080838   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.290640   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.379839   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.458741   49071 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:13:05.458843   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:05.475039   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:05.997438   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:06.496596   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:06.996587   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:07.496933   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:07.514268   49071 api_server.go:72] duration metric: took 2.055524654s to wait for apiserver process to appear ...
	I1024 20:13:07.514294   49071 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:13:07.514310   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:07.514802   49071 api_server.go:269] stopped: https://192.168.50.162:8443/healthz: Get "https://192.168.50.162:8443/healthz": dial tcp 192.168.50.162:8443: connect: connection refused
	I1024 20:13:07.514840   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:07.515243   49071 api_server.go:269] stopped: https://192.168.50.162:8443/healthz: Get "https://192.168.50.162:8443/healthz": dial tcp 192.168.50.162:8443: connect: connection refused
	I1024 20:13:08.015912   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:04.306097   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:06.805484   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:05.698099   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:08.196336   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:10.424613   50077 retry.go:31] will retry after 17.161095389s: kubelet not initialised
	I1024 20:13:12.512885   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.512923   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:12.512936   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:12.564368   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.564415   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:12.564435   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:12.578188   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.578210   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:13.015415   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:13.022900   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:13:13.022939   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:13:09.305906   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:11.805107   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:10.693989   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:12.696233   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:13.515731   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:13.520510   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:13:13.520565   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:13:14.015693   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:14.021308   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 200:
	ok
	I1024 20:13:14.029247   49071 api_server.go:141] control plane version: v1.28.3
	I1024 20:13:14.029271   49071 api_server.go:131] duration metric: took 6.514969351s to wait for apiserver health ...
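	(Editorial note: the healthz probes above progress from connection-refused while the apiserver container is still starting, to 403 from the anonymous user before the RBAC bootstrap poststarthook completes, to 500 while individual poststarthooks are still failing, and finally to 200. The loop below sketches that polling pattern; it is not minikube's api_server.go, and the skip-verify TLS and 500ms retry interval are assumptions.)

    // healthz_poll_sketch.go — illustrative polling loop, not minikube's api_server.go.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        // The apiserver certificate in the test VM is self-signed, so verification is skipped here (assumption).
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err != nil {
                // connection refused while the apiserver is still starting: retry
                time.Sleep(500 * time.Millisecond)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil // healthz returned "ok"
            }
            // 403 (anonymous user before RBAC bootstrap) and 500 (failed poststarthooks) are retried,
            // exactly as seen in the log above.
            fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.162:8443/healthz", 5*time.Minute); err != nil {
            fmt.Println(err)
        }
    }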
	I1024 20:13:14.029281   49071 cni.go:84] Creating CNI manager for ""
	I1024 20:13:14.029289   49071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:13:14.031023   49071 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:13:14.032390   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:13:14.042542   49071 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:13:14.061827   49071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:13:14.077006   49071 system_pods.go:59] 8 kube-system pods found
	I1024 20:13:14.077041   49071 system_pods.go:61] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:13:14.077058   49071 system_pods.go:61] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:13:14.077068   49071 system_pods.go:61] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:13:14.077078   49071 system_pods.go:61] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:13:14.077088   49071 system_pods.go:61] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:13:14.077102   49071 system_pods.go:61] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:13:14.077114   49071 system_pods.go:61] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:13:14.077125   49071 system_pods.go:61] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:13:14.077140   49071 system_pods.go:74] duration metric: took 15.296766ms to wait for pod list to return data ...
	I1024 20:13:14.077150   49071 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:13:14.080871   49071 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:13:14.080896   49071 node_conditions.go:123] node cpu capacity is 2
	I1024 20:13:14.080908   49071 node_conditions.go:105] duration metric: took 3.7473ms to run NodePressure ...
	I1024 20:13:14.080921   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:14.292868   49071 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:13:14.297583   49071 kubeadm.go:787] kubelet initialised
	I1024 20:13:14.297611   49071 kubeadm.go:788] duration metric: took 4.717728ms waiting for restarted kubelet to initialise ...
	I1024 20:13:14.297621   49071 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:14.303742   49071 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.309570   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.309600   49071 pod_ready.go:81] duration metric: took 5.835917ms waiting for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.309608   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.309616   49071 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.316423   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "etcd-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.316453   49071 pod_ready.go:81] duration metric: took 6.829373ms waiting for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.316577   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "etcd-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.316593   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.325238   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-apiserver-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.325271   49071 pod_ready.go:81] duration metric: took 8.669582ms waiting for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.325280   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-apiserver-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.325288   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.466293   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.466319   49071 pod_ready.go:81] duration metric: took 141.023699ms waiting for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.466331   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.466342   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.865820   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-proxy-hvphg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.865855   49071 pod_ready.go:81] duration metric: took 399.504017ms waiting for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.865867   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-proxy-hvphg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.865876   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:15.266786   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-scheduler-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.266820   49071 pod_ready.go:81] duration metric: took 400.936146ms waiting for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:15.266833   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-scheduler-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.266844   49071 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:15.666547   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.666582   49071 pod_ready.go:81] duration metric: took 399.72944ms waiting for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:15.666596   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.666617   49071 pod_ready.go:38] duration metric: took 1.368975115s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
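	(Editorial note: each pod_ready.go entry above is a check of the pod's Ready condition, skipped while the hosting node still reports Ready:False. A bare-bones client-go version of that check is sketched below; the kubeconfig path and pod name are placeholders, and this is not the helper minikube actually uses. In the log, the same check is simply repeated until the condition flips to True or the 4m0s budget runs out.)

    // pod_ready_sketch.go — rough illustration of the Ready-condition checks logged by pod_ready.go.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-gnn8j", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", podIsReady(pod))
    }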
	I1024 20:13:15.666636   49071 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:13:15.686675   49071 ops.go:34] apiserver oom_adj: -16
	I1024 20:13:15.686696   49071 kubeadm.go:640] restartCluster took 21.422341568s
	I1024 20:13:15.686706   49071 kubeadm.go:406] StartCluster complete in 21.486646231s
	I1024 20:13:15.686737   49071 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:13:15.686823   49071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:13:15.688903   49071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:13:15.689192   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:13:15.689321   49071 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:13:15.689405   49071 addons.go:69] Setting storage-provisioner=true in profile "no-preload-014826"
	I1024 20:13:15.689423   49071 addons.go:231] Setting addon storage-provisioner=true in "no-preload-014826"
	I1024 20:13:15.689462   49071 addons.go:69] Setting metrics-server=true in profile "no-preload-014826"
	I1024 20:13:15.689490   49071 addons.go:231] Setting addon metrics-server=true in "no-preload-014826"
	W1024 20:13:15.689512   49071 addons.go:240] addon metrics-server should already be in state true
	I1024 20:13:15.689560   49071 host.go:66] Checking if "no-preload-014826" exists ...
	W1024 20:13:15.689463   49071 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:13:15.689649   49071 host.go:66] Checking if "no-preload-014826" exists ...
	I1024 20:13:15.689445   49071 addons.go:69] Setting default-storageclass=true in profile "no-preload-014826"
	I1024 20:13:15.689716   49071 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-014826"
	I1024 20:13:15.689431   49071 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:13:15.690018   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690051   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.690060   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690086   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.690173   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690225   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.695832   49071 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-014826" context rescaled to 1 replicas
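	(Editorial note: the kapi.go:248 line above rescales the coredns deployment to a single replica for the one-node cluster. A sketch of that rescale through the client-go Scale subresource follows; the kubeconfig path is a placeholder, the error handling is minimal, and this is not minikube's kapi.go.)

    // coredns_rescale_sketch.go — illustrative use of the Deployments Scale subresource.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()
        // Read the current scale of the coredns deployment, then set it to one replica.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1 // one replica is enough on a single-node cluster
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }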
	I1024 20:13:15.695868   49071 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:13:15.698104   49071 out.go:177] * Verifying Kubernetes components...
	I1024 20:13:15.701812   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:13:15.708637   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45543
	I1024 20:13:15.709086   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.709579   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I1024 20:13:15.709941   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.709959   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.710044   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.710478   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.710629   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.710640   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.710943   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.710954   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.711125   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.711367   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I1024 20:13:15.711702   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.711739   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.711852   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.712441   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.712453   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.713081   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.713312   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.717141   49071 addons.go:231] Setting addon default-storageclass=true in "no-preload-014826"
	W1024 20:13:15.717173   49071 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:13:15.717201   49071 host.go:66] Checking if "no-preload-014826" exists ...
	I1024 20:13:15.717655   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.717688   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.729423   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38983
	I1024 20:13:15.730145   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.730747   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.730763   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.730811   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
	I1024 20:13:15.731224   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.731294   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.731487   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.731691   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.731704   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.732239   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.732712   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.733909   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.736374   49071 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:13:15.734682   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.736231   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I1024 20:13:15.738165   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:13:15.738178   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:13:15.738198   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.739819   49071 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:13:15.741717   49071 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:13:15.741733   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:13:15.741752   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.739693   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.742202   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.742374   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.742389   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.742978   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.743000   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.743088   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.743253   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.743408   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.743896   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.744551   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.745028   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.745145   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.745266   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.745462   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.745486   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.745735   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.745870   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.745956   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.746023   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.782650   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I1024 20:13:15.783126   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.783699   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.783721   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.784051   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.784270   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.786114   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.786409   49071 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:13:15.786424   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:13:15.786439   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.788982   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.789347   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.789376   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.789622   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.789838   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.790047   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.790195   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
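	(Editorial note: the sshutil.go:53 lines record minikube opening SSH sessions to 192.168.50.162:22 with the per-machine id_rsa key; the addon manifest copies and kubectl apply runs that follow all go through such sessions. The snippet below shows an equivalent golang.org/x/crypto/ssh setup in isolation; it is only a sketch, and the host-key handling shown (InsecureIgnoreHostKey) is an assumption, not what minikube does.)

    // ssh_client_sketch.go — standalone illustration of the SSH sessions logged by sshutil.go.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and username are taken from the sshutil.go log line above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: production code should verify the host key
        }
        client, err := ssh.Dial("tcp", "192.168.50.162:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        // Run one of the commands seen in the log and print its combined output.
        out, err := session.Output("sudo systemctl is-active --quiet service kubelet; echo $?")
        if err != nil {
            panic(err)
        }
        fmt.Printf("remote output: %s", out)
    }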
	I1024 20:13:15.870753   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:13:15.870771   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:13:15.893772   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:13:15.893799   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:13:15.916179   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:13:15.928570   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:13:15.928596   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:13:15.950610   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:13:15.987129   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:13:15.987945   49071 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:13:15.987993   49071 node_ready.go:35] waiting up to 6m0s for node "no-preload-014826" to be "Ready" ...
	I1024 20:13:17.450534   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.53431699s)
	I1024 20:13:17.450534   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.499892733s)
	I1024 20:13:17.450586   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.450597   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.450609   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.450621   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451126   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451143   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451152   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451160   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451176   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.451180   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451186   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451190   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.451200   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451211   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451380   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451410   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451415   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451429   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451430   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451442   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.464276   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.464297   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.464561   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.464578   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.464585   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.626276   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.639098267s)
	I1024 20:13:17.626344   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.626364   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.626686   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.626711   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.626713   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.626765   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.626779   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.627054   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.627071   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.627082   49071 addons.go:467] Verifying addon metrics-server=true in "no-preload-014826"
	I1024 20:13:17.629289   49071 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1024 20:13:17.630781   49071 addons.go:502] enable addons completed in 1.94145774s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1024 20:13:18.084997   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:13.805526   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:15.807970   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:18.305400   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:15.194668   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:17.694096   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:20.085063   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:22.086260   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:23.087300   49071 node_ready.go:49] node "no-preload-014826" has status "Ready":"True"
	I1024 20:13:23.087338   49071 node_ready.go:38] duration metric: took 7.0993157s waiting for node "no-preload-014826" to be "Ready" ...
	I1024 20:13:23.087350   49071 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:23.093785   49071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:23.101553   49071 pod_ready.go:92] pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:23.101576   49071 pod_ready.go:81] duration metric: took 7.766543ms waiting for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:23.101588   49071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:20.808097   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:23.306150   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:19.696002   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:22.195097   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:27.592041   50077 kubeadm.go:787] kubelet initialised
	I1024 20:13:27.592064   50077 kubeadm.go:788] duration metric: took 44.890387595s waiting for restarted kubelet to initialise ...
	I1024 20:13:27.592071   50077 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:27.596611   50077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.601949   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.601972   50077 pod_ready.go:81] duration metric: took 5.342417ms waiting for pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.601979   50077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.607096   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.607118   50077 pod_ready.go:81] duration metric: took 5.132259ms waiting for pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.607130   50077 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.611971   50077 pod_ready.go:92] pod "etcd-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.611991   50077 pod_ready.go:81] duration metric: took 4.854068ms waiting for pod "etcd-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.612002   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.616975   50077 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.616995   50077 pod_ready.go:81] duration metric: took 4.985984ms waiting for pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.617006   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.620272   49071 pod_ready.go:92] pod "etcd-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:24.620294   49071 pod_ready.go:81] duration metric: took 1.518699618s waiting for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.620304   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.625954   49071 pod_ready.go:92] pod "kube-apiserver-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:24.625975   49071 pod_ready.go:81] duration metric: took 5.666043ms waiting for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.625985   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.096309   49071 pod_ready.go:92] pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.096338   49071 pod_ready.go:81] duration metric: took 2.470345358s waiting for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.096363   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.101417   49071 pod_ready.go:92] pod "kube-proxy-hvphg" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.101439   49071 pod_ready.go:81] duration metric: took 5.060638ms waiting for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.101457   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.487627   49071 pod_ready.go:92] pod "kube-scheduler-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.487655   49071 pod_ready.go:81] duration metric: took 386.189892ms waiting for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.487668   49071 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:25.805375   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:28.304314   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:24.199489   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:26.694339   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:27.990781   50077 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.990808   50077 pod_ready.go:81] duration metric: took 373.794401ms waiting for pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.990817   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jdvck" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.389532   50077 pod_ready.go:92] pod "kube-proxy-jdvck" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:28.389554   50077 pod_ready.go:81] duration metric: took 398.730628ms waiting for pod "kube-proxy-jdvck" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.389562   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.791217   50077 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:28.791245   50077 pod_ready.go:81] duration metric: took 401.675656ms waiting for pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.791259   50077 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:31.101273   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:29.797752   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:32.294823   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:30.305423   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:32.804966   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:29.196181   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:31.694405   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:33.597846   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.098571   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:34.295326   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.295502   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:35.307544   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:37.804734   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:34.193583   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.194545   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.693640   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.598114   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.598778   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.295582   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.797360   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.303674   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:42.305932   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:41.193409   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.694630   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.097684   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.599550   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.295412   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.295801   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:47.795437   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:44.806885   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:47.305513   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.695737   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:48.194597   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:48.098390   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:50.098465   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.598464   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:49.796354   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.296299   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:49.806019   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.304671   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:50.692678   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.693810   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:55.099808   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:57.596982   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:54.795042   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:56.795788   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:54.305480   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:56.805003   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:55.192666   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:57.192992   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.598091   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:02.097277   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.296748   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.799381   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.304665   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.305140   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.193682   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.694286   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.098871   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.598019   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.297114   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.796174   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:03.804391   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:05.805262   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.304535   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.194236   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.692751   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.693756   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.598278   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:10.598744   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:09.296355   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:11.794188   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:10.805023   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.304639   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:11.193179   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.696086   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.097069   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.598606   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.795184   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.797064   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.804980   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.304229   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:16.193316   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.193452   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.099418   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.597767   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.598478   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.294610   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.295299   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.295580   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.304386   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.304955   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.693442   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.695298   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.598688   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.098094   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.796039   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.294583   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.804411   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:26.805975   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:25.193984   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.194309   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.098448   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.597809   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.295004   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.296770   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.302945   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.303224   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.305333   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.693713   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.693887   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.695638   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.599337   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:36.098527   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.795335   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:35.796128   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:37.798347   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:35.307171   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:37.806058   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:36.192382   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:38.195932   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:38.098563   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.098830   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.598203   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.295075   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.796827   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.304919   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.805069   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.693934   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.694102   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.598267   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.097792   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:45.297437   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.795616   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.805647   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:46.806849   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.695195   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.194156   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.597390   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:52.099367   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:50.294686   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:52.297230   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.306571   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:51.804484   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.194481   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:51.693650   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:53.694257   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:54.597760   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.597897   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:54.794752   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.795666   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:53.805053   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.303997   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:58.304326   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.193984   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:58.693506   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:59.098488   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:01.098937   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:59.297834   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:01.795492   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:00.305557   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:02.805113   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:00.694107   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.194559   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.597853   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:05.598764   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.798231   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:06.296567   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:04.805204   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:06.806277   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:05.693959   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.194793   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.098369   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:10.099343   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:12.597632   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.795941   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:11.295163   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:09.303880   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:11.308399   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:10.692947   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:12.694115   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.098788   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.598778   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:13.297546   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.799219   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:13.804941   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.805508   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.805620   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.194071   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.692344   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.099461   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:22.598528   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:18.294855   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.795197   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.303894   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:22.807109   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:19.693273   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:21.694158   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:23.694489   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:24.598739   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:26.610829   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:23.295231   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:25.296151   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:27.794796   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:25.304009   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:27.304056   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:26.194236   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:28.692475   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.097722   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.099314   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.795050   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.795981   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.304915   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.306232   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:30.693731   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.193919   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.100924   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:35.597972   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:37.598135   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:34.295967   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:36.297180   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.809488   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:36.305924   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:35.696190   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.193380   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.098563   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:42.597443   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.794953   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.794982   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.806251   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:41.304826   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.694041   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.192299   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:44.598402   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.097519   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.294813   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.297991   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.794454   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.803978   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.804440   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.805016   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.192754   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.693494   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.098171   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:51.598327   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.795988   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:52.296853   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.806503   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:51.807986   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:50.193124   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:52.692831   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.097085   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.600496   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.795189   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.795825   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.304728   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.305314   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.696873   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:57.193194   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.098128   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.099894   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.295180   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.295325   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:58.804230   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:00.804430   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.303762   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.193752   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.194280   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.694730   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.597363   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.598434   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.599790   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.295998   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.298356   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.795402   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.305076   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.805412   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:04.884378   49198 pod_ready.go:81] duration metric: took 4m0.000380407s waiting for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	E1024 20:16:04.884408   49198 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:16:04.884437   49198 pod_ready.go:38] duration metric: took 4m3.201253081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
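The repeated pod_ready.go:102 lines above, and the "context deadline exceeded" summary they end in, come from a loop that re-checks the pod's Ready condition until a deadline expires. The following is a minimal sketch of that pattern using client-go, not minikube's actual pod_ready implementation; the pod name and namespace are taken from the log, while the 2-second poll interval, the kubeconfig path, and the 4-minute timeout are assumptions for illustration.

// Hypothetical readiness-wait sketch, in the spirit of the pod_ready.go lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, interval time.Duration) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reported Ready
				}
			}
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		}
		select {
		case <-ctx.Done():
			// matches the log's "WaitExtra: waitPodCondition: context deadline exceeded"
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-time.After(interval):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute) // assumed deadline
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "metrics-server-57f55c9bc5-pv9ww", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}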
	I1024 20:16:04.884459   49198 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:16:04.884488   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:04.884542   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:04.941853   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:04.941878   49198 cri.go:89] found id: ""
	I1024 20:16:04.941889   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:04.941963   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:04.947250   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:04.947317   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:04.990126   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:04.990151   49198 cri.go:89] found id: ""
	I1024 20:16:04.990163   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:04.990226   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:04.995026   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:04.995086   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:05.045422   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:05.045441   49198 cri.go:89] found id: ""
	I1024 20:16:05.045449   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:05.045505   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.049931   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:05.049997   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:05.115746   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:05.115767   49198 cri.go:89] found id: ""
	I1024 20:16:05.115775   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:05.115822   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.120476   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:05.120527   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:05.163487   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:05.163509   49198 cri.go:89] found id: ""
	I1024 20:16:05.163521   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:05.163580   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.167956   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:05.168027   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:05.209375   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:05.209403   49198 cri.go:89] found id: ""
	I1024 20:16:05.209412   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:05.209468   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.213932   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:05.213994   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:05.256033   49198 cri.go:89] found id: ""
	I1024 20:16:05.256055   49198 logs.go:284] 0 containers: []
	W1024 20:16:05.256070   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:05.256077   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:05.256130   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:05.313137   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:05.313163   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:05.313171   49198 cri.go:89] found id: ""
	I1024 20:16:05.313181   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:05.313236   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.319603   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.324116   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:05.324138   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:05.364879   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:05.364905   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:05.430314   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:05.430342   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:05.488524   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:05.488550   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:05.547000   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:05.547029   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:05.561360   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:05.561392   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:05.616215   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:05.616254   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:05.666923   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:05.666955   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:05.707305   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:05.707332   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:05.865943   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:05.865972   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:05.914044   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:05.914070   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:06.370658   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:06.370692   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:06.423891   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:06.423919   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
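Each "Gathering logs for ..." step above shells out to crictl to fetch the last 400 lines for one container ID. The sketch below shows that command wrapped in Go; in minikube the command actually runs inside the VM over SSH (ssh_runner.go), so running it locally here is a simplification, and the container ID is just one copied from the log.

// Hypothetical local equivalent of the crictl log-gathering steps above.
package main

import (
	"fmt"
	"os/exec"
)

func gatherContainerLogs(containerID string) (string, error) {
	// Mirrors: sudo /usr/bin/crictl logs --tail 400 <container-id>
	out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", containerID).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := gatherContainerLogs("9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0")
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Println(logs)
}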
	I1024 20:16:10.098187   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:12.597089   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:09.796035   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:11.796300   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:09.805755   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:11.806246   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:08.967015   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:16:08.982371   49198 api_server.go:72] duration metric: took 4m12.675281905s to wait for apiserver process to appear ...
	I1024 20:16:08.982397   49198 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:16:08.982431   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:08.982492   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:09.023557   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:09.023575   49198 cri.go:89] found id: ""
	I1024 20:16:09.023582   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:09.023626   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.029901   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:09.029954   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:09.066141   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:09.066169   49198 cri.go:89] found id: ""
	I1024 20:16:09.066181   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:09.066232   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.071099   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:09.071161   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:09.117898   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:09.117917   49198 cri.go:89] found id: ""
	I1024 20:16:09.117927   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:09.117979   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.122675   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:09.122729   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:09.162628   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:09.162647   49198 cri.go:89] found id: ""
	I1024 20:16:09.162656   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:09.162711   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.166799   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:09.166859   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:09.203866   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:09.203894   49198 cri.go:89] found id: ""
	I1024 20:16:09.203904   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:09.203968   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.208141   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:09.208201   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:09.252432   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:09.252449   49198 cri.go:89] found id: ""
	I1024 20:16:09.252457   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:09.252519   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.257709   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:09.257767   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:09.312883   49198 cri.go:89] found id: ""
	I1024 20:16:09.312908   49198 logs.go:284] 0 containers: []
	W1024 20:16:09.312919   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:09.312926   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:09.312984   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:09.365111   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:09.365138   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:09.365145   49198 cri.go:89] found id: ""
	I1024 20:16:09.365155   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:09.365215   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.370442   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.375055   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:09.375082   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:09.440328   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:09.440361   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:09.489007   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:09.489035   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:09.539429   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:09.539467   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:09.591012   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:09.591049   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:09.608336   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:09.608362   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:09.656190   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:09.656216   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:09.704915   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:09.704942   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:09.743847   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:09.743878   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:10.154301   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:10.154342   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:10.296525   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:10.296552   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:10.347731   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:10.347763   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:10.388130   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:10.388157   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:12.931381   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:16:12.938286   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 200:
	ok
	I1024 20:16:12.940208   49198 api_server.go:141] control plane version: v1.28.3
	I1024 20:16:12.940228   49198 api_server.go:131] duration metric: took 3.957823811s to wait for apiserver health ...
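The api_server.go:253/279 lines above are a direct GET against the apiserver's /healthz endpoint that returned 200 "ok". Below is a minimal sketch of such a probe; the URL is the one from the log, but skipping TLS verification is an assumption made for brevity here, not how minikube's own check works (it uses the cluster's certificates).

// Hypothetical /healthz probe like the api_server.go lines above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// assumption: no CA bundle at hand, so verification is skipped for the sketch
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.72.10:8443/healthz") // address taken from the log
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.72.10:8443/healthz returned %d: %s\n", resp.StatusCode, body)
}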
	I1024 20:16:12.940236   49198 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:16:12.940255   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:12.940311   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:12.985630   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:12.985654   49198 cri.go:89] found id: ""
	I1024 20:16:12.985664   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:12.985736   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:12.991021   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:12.991094   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:13.031617   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:13.031638   49198 cri.go:89] found id: ""
	I1024 20:16:13.031647   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:13.031690   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.036956   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:13.037010   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:13.074663   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:13.074683   49198 cri.go:89] found id: ""
	I1024 20:16:13.074692   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:13.074745   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.079061   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:13.079115   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:13.122923   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:13.122947   49198 cri.go:89] found id: ""
	I1024 20:16:13.122957   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:13.123010   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.126914   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:13.126987   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:13.174746   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:13.174781   49198 cri.go:89] found id: ""
	I1024 20:16:13.174791   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:13.174867   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.179817   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:13.179884   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:13.228560   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:13.228588   49198 cri.go:89] found id: ""
	I1024 20:16:13.228606   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:13.228661   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.233182   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:13.233247   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:13.272072   49198 cri.go:89] found id: ""
	I1024 20:16:13.272100   49198 logs.go:284] 0 containers: []
	W1024 20:16:13.272110   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:13.272117   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:13.272174   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:13.317104   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:13.317129   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:13.317137   49198 cri.go:89] found id: ""
	I1024 20:16:13.317148   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:13.317208   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.327265   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.331706   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:13.331730   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:13.378259   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:13.378299   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:13.402257   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:13.402289   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:13.465655   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:13.465685   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:13.521268   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:13.521312   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:13.923501   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:13.923550   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:13.976055   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:13.976082   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:14.028953   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:14.028985   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:14.069859   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:14.069887   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:14.196920   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:14.196959   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:14.257588   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:14.257617   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:14.302980   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:14.303019   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:14.344441   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:14.344469   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:16.893365   49198 system_pods.go:59] 8 kube-system pods found
	I1024 20:16:16.893395   49198 system_pods.go:61] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running
	I1024 20:16:16.893404   49198 system_pods.go:61] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running
	I1024 20:16:16.893412   49198 system_pods.go:61] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running
	I1024 20:16:16.893419   49198 system_pods.go:61] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running
	I1024 20:16:16.893426   49198 system_pods.go:61] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running
	I1024 20:16:16.893433   49198 system_pods.go:61] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running
	I1024 20:16:16.893444   49198 system_pods.go:61] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:16.893456   49198 system_pods.go:61] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running
	I1024 20:16:16.893469   49198 system_pods.go:74] duration metric: took 3.953227014s to wait for pod list to return data ...
	I1024 20:16:16.893483   49198 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:16:16.895879   49198 default_sa.go:45] found service account: "default"
	I1024 20:16:16.895896   49198 default_sa.go:55] duration metric: took 2.405313ms for default service account to be created ...
	I1024 20:16:16.895903   49198 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:16:16.902189   49198 system_pods.go:86] 8 kube-system pods found
	I1024 20:16:16.902217   49198 system_pods.go:89] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running
	I1024 20:16:16.902225   49198 system_pods.go:89] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running
	I1024 20:16:16.902232   49198 system_pods.go:89] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running
	I1024 20:16:16.902240   49198 system_pods.go:89] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running
	I1024 20:16:16.902246   49198 system_pods.go:89] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running
	I1024 20:16:16.902253   49198 system_pods.go:89] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running
	I1024 20:16:16.902269   49198 system_pods.go:89] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:16.902281   49198 system_pods.go:89] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running
	I1024 20:16:16.902292   49198 system_pods.go:126] duration metric: took 6.383517ms to wait for k8s-apps to be running ...
	I1024 20:16:16.902303   49198 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:16:16.902359   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:16:16.920015   49198 system_svc.go:56] duration metric: took 17.706073ms WaitForService to wait for kubelet.
	I1024 20:16:16.920039   49198 kubeadm.go:581] duration metric: took 4m20.612955305s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:16:16.920063   49198 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:16:16.924147   49198 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:16:16.924167   49198 node_conditions.go:123] node cpu capacity is 2
	I1024 20:16:16.924177   49198 node_conditions.go:105] duration metric: took 4.109839ms to run NodePressure ...
	I1024 20:16:16.924187   49198 start.go:228] waiting for startup goroutines ...
	I1024 20:16:16.924194   49198 start.go:233] waiting for cluster config update ...
	I1024 20:16:16.924206   49198 start.go:242] writing updated cluster config ...
	I1024 20:16:16.924490   49198 ssh_runner.go:195] Run: rm -f paused
	I1024 20:16:16.973588   49198 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:16:16.975639   49198 out.go:177] * Done! kubectl is now configured to use "embed-certs-867165" cluster and "default" namespace by default
	I1024 20:16:14.597646   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.598202   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:14.296652   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.795527   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:14.304610   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.305225   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.598694   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:21.099076   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.795830   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:21.295897   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.804148   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:20.805158   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.304826   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.598167   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.598533   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:27.598810   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.794690   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.796011   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:27.798006   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.803034   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:26.497612   49708 pod_ready.go:81] duration metric: took 4m0.000149915s waiting for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	E1024 20:16:26.497657   49708 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:16:26.497666   49708 pod_ready.go:38] duration metric: took 4m3.599625321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:16:26.497682   49708 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:16:26.497709   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:26.497757   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:26.569452   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:26.569479   49708 cri.go:89] found id: ""
	I1024 20:16:26.569489   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:26.569551   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.573824   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:26.573872   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:26.618910   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:26.618939   49708 cri.go:89] found id: ""
	I1024 20:16:26.618946   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:26.618998   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.623675   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:26.623723   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:26.671601   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:26.671621   49708 cri.go:89] found id: ""
	I1024 20:16:26.671628   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:26.671665   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.675997   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:26.676048   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:26.723100   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:26.723124   49708 cri.go:89] found id: ""
	I1024 20:16:26.723133   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:26.723187   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.727780   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:26.727837   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:26.765584   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:26.765608   49708 cri.go:89] found id: ""
	I1024 20:16:26.765618   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:26.765663   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.770062   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:26.770121   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:26.811710   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:26.811728   49708 cri.go:89] found id: ""
	I1024 20:16:26.811736   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:26.811786   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.816125   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:26.816187   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:26.860427   49708 cri.go:89] found id: ""
	I1024 20:16:26.860452   49708 logs.go:284] 0 containers: []
	W1024 20:16:26.860462   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:26.860469   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:26.860532   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:26.905052   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:26.905083   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:26.905091   49708 cri.go:89] found id: ""
	I1024 20:16:26.905100   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:26.905154   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.909590   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.913618   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:26.913636   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:26.958127   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:26.958157   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:27.012523   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:27.012555   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:27.059311   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:27.059345   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:27.102879   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:27.102905   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:27.154377   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:27.154409   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:27.197488   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:27.197516   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:27.210530   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:27.210559   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:27.379195   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:27.379225   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:27.826087   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:27.826119   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:27.880305   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:27.880348   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:27.932382   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:27.932417   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:27.979060   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:27.979088   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:29.598843   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:31.598885   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:30.295090   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:32.295447   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:30.532134   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:16:30.547497   49708 api_server.go:72] duration metric: took 4m14.551629626s to wait for apiserver process to appear ...
	I1024 20:16:30.547522   49708 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:16:30.547562   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:30.547627   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:30.588076   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:30.588097   49708 cri.go:89] found id: ""
	I1024 20:16:30.588104   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:30.588159   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.592397   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:30.592467   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:30.632362   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:30.632380   49708 cri.go:89] found id: ""
	I1024 20:16:30.632389   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:30.632446   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.636647   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:30.636695   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:30.676966   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:30.676997   49708 cri.go:89] found id: ""
	I1024 20:16:30.677005   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:30.677050   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.682153   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:30.682206   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:30.723427   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:30.723449   49708 cri.go:89] found id: ""
	I1024 20:16:30.723458   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:30.723516   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.727674   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:30.727740   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:30.774450   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:30.774473   49708 cri.go:89] found id: ""
	I1024 20:16:30.774482   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:30.774535   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.778753   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:30.778821   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:30.830068   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:30.830094   49708 cri.go:89] found id: ""
	I1024 20:16:30.830104   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:30.830169   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.835133   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:30.835201   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:30.885323   49708 cri.go:89] found id: ""
	I1024 20:16:30.885347   49708 logs.go:284] 0 containers: []
	W1024 20:16:30.885357   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:30.885363   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:30.885423   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:30.925415   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:30.925435   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:30.925440   49708 cri.go:89] found id: ""
	I1024 20:16:30.925447   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:30.925506   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.929723   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.933926   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:30.933965   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:30.999217   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:30.999250   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:31.051267   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:31.051300   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:31.107411   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:31.107444   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:31.233980   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:31.234009   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:31.275335   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:31.275362   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:31.329276   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:31.329316   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:31.380149   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:31.380184   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:31.393990   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:31.394016   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:31.440032   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:31.440065   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:31.478413   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:31.478445   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:31.529321   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:31.529349   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:31.578678   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:31.578708   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:33.603558   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:36.099473   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:34.295685   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:36.794759   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:34.514152   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:16:34.520578   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 200:
	ok
	I1024 20:16:34.522271   49708 api_server.go:141] control plane version: v1.28.3
	I1024 20:16:34.522289   49708 api_server.go:131] duration metric: took 3.974761353s to wait for apiserver health ...
	I1024 20:16:34.522297   49708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:16:34.522318   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:34.522363   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:34.568260   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:34.568280   49708 cri.go:89] found id: ""
	I1024 20:16:34.568287   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:34.568336   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.575356   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:34.575414   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:34.623358   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:34.623383   49708 cri.go:89] found id: ""
	I1024 20:16:34.623392   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:34.623449   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.628721   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:34.628777   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:34.675561   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:34.675583   49708 cri.go:89] found id: ""
	I1024 20:16:34.675592   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:34.675654   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.681613   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:34.681677   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:34.722858   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:34.722898   49708 cri.go:89] found id: ""
	I1024 20:16:34.722917   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:34.722974   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.727310   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:34.727376   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:34.768365   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:34.768383   49708 cri.go:89] found id: ""
	I1024 20:16:34.768390   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:34.768436   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.772776   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:34.772837   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:34.825992   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:34.826020   49708 cri.go:89] found id: ""
	I1024 20:16:34.826030   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:34.826083   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.830957   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:34.831011   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:34.878138   49708 cri.go:89] found id: ""
	I1024 20:16:34.878167   49708 logs.go:284] 0 containers: []
	W1024 20:16:34.878175   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:34.878180   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:34.878235   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:34.929288   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:34.929321   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:34.929328   49708 cri.go:89] found id: ""
	I1024 20:16:34.929338   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:34.929391   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.933731   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.938300   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:34.938326   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:34.980919   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:34.980944   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:35.021465   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:35.021495   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:35.165907   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:35.165935   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:35.212733   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:35.212759   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:35.620344   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:35.620395   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:35.669555   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:35.669588   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:35.720959   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:35.720987   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:35.762823   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:35.762852   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:35.805994   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:35.806021   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:35.879019   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:35.879046   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:35.941760   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:35.941796   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:35.995475   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:35.995515   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:38.526080   49708 system_pods.go:59] 8 kube-system pods found
	I1024 20:16:38.526106   49708 system_pods.go:61] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running
	I1024 20:16:38.526114   49708 system_pods.go:61] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running
	I1024 20:16:38.526122   49708 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running
	I1024 20:16:38.526128   49708 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running
	I1024 20:16:38.526133   49708 system_pods.go:61] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running
	I1024 20:16:38.526139   49708 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running
	I1024 20:16:38.526150   49708 system_pods.go:61] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:38.526159   49708 system_pods.go:61] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running
	I1024 20:16:38.526168   49708 system_pods.go:74] duration metric: took 4.003864797s to wait for pod list to return data ...
	I1024 20:16:38.526182   49708 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:16:38.528827   49708 default_sa.go:45] found service account: "default"
	I1024 20:16:38.528854   49708 default_sa.go:55] duration metric: took 2.662588ms for default service account to be created ...
	I1024 20:16:38.528863   49708 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:16:38.534560   49708 system_pods.go:86] 8 kube-system pods found
	I1024 20:16:38.534579   49708 system_pods.go:89] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running
	I1024 20:16:38.534585   49708 system_pods.go:89] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running
	I1024 20:16:38.534589   49708 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running
	I1024 20:16:38.534594   49708 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running
	I1024 20:16:38.534598   49708 system_pods.go:89] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running
	I1024 20:16:38.534602   49708 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running
	I1024 20:16:38.534610   49708 system_pods.go:89] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:38.534615   49708 system_pods.go:89] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running
	I1024 20:16:38.534622   49708 system_pods.go:126] duration metric: took 5.753846ms to wait for k8s-apps to be running ...
	I1024 20:16:38.534630   49708 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:16:38.534668   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:16:38.549835   49708 system_svc.go:56] duration metric: took 15.197069ms WaitForService to wait for kubelet.
	I1024 20:16:38.549856   49708 kubeadm.go:581] duration metric: took 4m22.553994431s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:16:38.549878   49708 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:16:38.553043   49708 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:16:38.553065   49708 node_conditions.go:123] node cpu capacity is 2
	I1024 20:16:38.553076   49708 node_conditions.go:105] duration metric: took 3.193057ms to run NodePressure ...
	I1024 20:16:38.553086   49708 start.go:228] waiting for startup goroutines ...
	I1024 20:16:38.553091   49708 start.go:233] waiting for cluster config update ...
	I1024 20:16:38.553100   49708 start.go:242] writing updated cluster config ...
	I1024 20:16:38.553348   49708 ssh_runner.go:195] Run: rm -f paused
	I1024 20:16:38.601183   49708 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:16:38.603463   49708 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-643126" cluster and "default" namespace by default
	I1024 20:16:38.597848   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:40.599437   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:38.795772   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:41.293845   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:43.096749   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:45.097165   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:47.097443   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:43.298644   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:45.797003   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:49.097716   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:51.597754   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:48.295110   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:50.796361   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:53.600174   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:56.097860   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:53.295856   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:55.295890   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:57.795597   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:58.097890   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:00.598554   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:59.795830   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:02.295268   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:03.098362   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:05.596632   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:04.296575   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:06.296820   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:08.098450   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:10.597828   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:12.599199   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:08.795717   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:11.296662   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:15.097014   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:17.097844   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:13.794373   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:15.795134   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:17.795531   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:19.098039   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:21.098582   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:19.796588   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:22.296536   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:23.597792   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:26.098066   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:24.795501   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:26.796240   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:27.488206   49071 pod_ready.go:81] duration metric: took 4m0.000518995s waiting for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	E1024 20:17:27.488255   49071 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:17:27.488267   49071 pod_ready.go:38] duration metric: took 4m4.400905907s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:17:27.488288   49071 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:17:27.488320   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:27.488379   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:27.544995   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:27.545022   49071 cri.go:89] found id: ""
	I1024 20:17:27.545033   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:27.545116   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.550068   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:27.550127   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:27.595184   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:27.595207   49071 cri.go:89] found id: ""
	I1024 20:17:27.595215   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:27.595265   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.600016   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:27.600075   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:27.644222   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:27.644254   49071 cri.go:89] found id: ""
	I1024 20:17:27.644265   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:27.644321   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.654982   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:27.655048   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:27.697751   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:27.697773   49071 cri.go:89] found id: ""
	I1024 20:17:27.697783   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:27.697838   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.701909   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:27.701969   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:27.746060   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:27.746085   49071 cri.go:89] found id: ""
	I1024 20:17:27.746094   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:27.746147   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.750335   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:27.750392   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:27.791948   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:27.791973   49071 cri.go:89] found id: ""
	I1024 20:17:27.791981   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:27.792045   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.796535   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:27.796616   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:27.839648   49071 cri.go:89] found id: ""
	I1024 20:17:27.839675   49071 logs.go:284] 0 containers: []
	W1024 20:17:27.839683   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:27.839689   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:27.839750   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:27.889284   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:27.889327   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:27.889334   49071 cri.go:89] found id: ""
	I1024 20:17:27.889343   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:27.889404   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.893661   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.897791   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:27.897819   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:27.941335   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:27.941369   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:27.954378   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:27.954409   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:28.115760   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:28.115792   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:28.171378   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:28.171409   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:28.211591   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:28.211620   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:28.247491   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247676   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247811   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247961   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:28.268681   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:28.268717   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:28.099979   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:28.791972   50077 pod_ready.go:81] duration metric: took 4m0.000695315s waiting for pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace to be "Ready" ...
	E1024 20:17:28.792005   50077 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:17:28.792032   50077 pod_ready.go:38] duration metric: took 4m1.199949971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:17:28.792069   50077 kubeadm.go:640] restartCluster took 5m7.653001653s
	W1024 20:17:28.792133   50077 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1024 20:17:28.792173   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1024 20:17:28.321382   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:28.321413   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:28.364236   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:28.364260   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:28.840985   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:28.841028   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:28.896806   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:28.896846   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:28.948487   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:28.948520   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:28.993469   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:28.993500   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:29.052064   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:29.052102   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:29.052154   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:29.052165   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052174   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052180   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052186   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:29.052191   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:29.052196   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
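	The repeated "listing CRI containers" / "Gathering logs for ..." lines above come from minikube's log collector: for each control-plane component it lists container IDs with `crictl ps -a --quiet --name=<component>` and then tails each container's log with `crictl logs --tail 400 <id>`. The Go program below is only a minimal local sketch of that pattern, not minikube's actual ssh_runner-based implementation; it assumes crictl and sudo are available on the current host, and the component names and --tail 400 are taken from the log lines above.

	// Minimal sketch (assumption: run locally with os/exec instead of minikube's ssh_runner).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}

		for _, name := range components {
			// Equivalent of: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("listing %s failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%d containers for %s: %v\n", len(ids), name, ids)

			for _, id := range ids {
				// Equivalent of: sudo /usr/bin/crictl logs --tail 400 <id>
				logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Printf("  logs for %s failed: %v\n", id, err)
					continue
				}
				fmt.Printf("  %s: %d bytes of logs\n", id, len(logs))
			}
		}
	}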
	I1024 20:17:33.598790   50077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.806587354s)
	I1024 20:17:33.598873   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:17:33.614594   50077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:17:33.625146   50077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:17:33.635420   50077 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:17:33.635486   50077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1024 20:17:33.858680   50077 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 20:17:39.053169   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:17:39.069883   49071 api_server.go:72] duration metric: took 4m23.373979574s to wait for apiserver process to appear ...
	I1024 20:17:39.069910   49071 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:17:39.069953   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:39.070015   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:39.116676   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:39.116696   49071 cri.go:89] found id: ""
	I1024 20:17:39.116703   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:39.116752   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.121745   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:39.121814   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:39.174897   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:39.174932   49071 cri.go:89] found id: ""
	I1024 20:17:39.174943   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:39.175002   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.180933   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:39.181003   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:39.239666   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:39.239691   49071 cri.go:89] found id: ""
	I1024 20:17:39.239701   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:39.239754   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.244270   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:39.244328   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:39.285405   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:39.285432   49071 cri.go:89] found id: ""
	I1024 20:17:39.285443   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:39.285503   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.290326   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:39.290393   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:39.330723   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:39.330751   49071 cri.go:89] found id: ""
	I1024 20:17:39.330761   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:39.330816   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.335850   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:39.335917   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:39.375354   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:39.375377   49071 cri.go:89] found id: ""
	I1024 20:17:39.375387   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:39.375449   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.380243   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:39.380313   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:39.424841   49071 cri.go:89] found id: ""
	I1024 20:17:39.424875   49071 logs.go:284] 0 containers: []
	W1024 20:17:39.424885   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:39.424892   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:39.424950   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:39.464134   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:39.464153   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:39.464160   49071 cri.go:89] found id: ""
	I1024 20:17:39.464168   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:39.464224   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.468810   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.473093   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:39.473128   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:39.507113   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507292   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507432   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507588   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:39.530433   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:39.530479   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:39.666739   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:39.666765   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:39.710505   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:39.710538   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:39.749917   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:39.749946   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:39.799168   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:39.799196   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:39.846346   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:39.846377   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:40.273032   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:40.273065   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:40.320491   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:40.320521   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:40.378356   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:40.378395   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:40.421618   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:40.421647   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:40.466303   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:40.466334   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:40.478941   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:40.478966   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:40.544618   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:40.544642   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:40.544694   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:40.544706   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544718   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544725   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544733   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:40.544739   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:40.544747   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:46.481686   50077 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1024 20:17:46.481762   50077 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 20:17:46.481861   50077 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 20:17:46.482000   50077 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 20:17:46.482104   50077 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 20:17:46.482236   50077 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 20:17:46.482362   50077 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 20:17:46.482486   50077 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1024 20:17:46.482538   50077 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 20:17:46.484150   50077 out.go:204]   - Generating certificates and keys ...
	I1024 20:17:46.484246   50077 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 20:17:46.484315   50077 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 20:17:46.484402   50077 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1024 20:17:46.484509   50077 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1024 20:17:46.484603   50077 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1024 20:17:46.484689   50077 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1024 20:17:46.484778   50077 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1024 20:17:46.484870   50077 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1024 20:17:46.484972   50077 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1024 20:17:46.485069   50077 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1024 20:17:46.485123   50077 kubeadm.go:322] [certs] Using the existing "sa" key
	I1024 20:17:46.485200   50077 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 20:17:46.485263   50077 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 20:17:46.485343   50077 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 20:17:46.485430   50077 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 20:17:46.485503   50077 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 20:17:46.485590   50077 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 20:17:46.487065   50077 out.go:204]   - Booting up control plane ...
	I1024 20:17:46.487158   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 20:17:46.487219   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 20:17:46.487291   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 20:17:46.487401   50077 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 20:17:46.487551   50077 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 20:17:46.487623   50077 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.003664 seconds
	I1024 20:17:46.487756   50077 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 20:17:46.487882   50077 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 20:17:46.487940   50077 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 20:17:46.488123   50077 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-467375 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1024 20:17:46.488199   50077 kubeadm.go:322] [bootstrap-token] Using token: axp9sy.xsem3c8nzt72b18p
	I1024 20:17:46.490507   50077 out.go:204]   - Configuring RBAC rules ...
	I1024 20:17:46.490603   50077 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 20:17:46.490719   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 20:17:46.490832   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 20:17:46.490938   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 20:17:46.491009   50077 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 20:17:46.491044   50077 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 20:17:46.491083   50077 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 20:17:46.491091   50077 kubeadm.go:322] 
	I1024 20:17:46.491151   50077 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 20:17:46.491163   50077 kubeadm.go:322] 
	I1024 20:17:46.491224   50077 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 20:17:46.491231   50077 kubeadm.go:322] 
	I1024 20:17:46.491260   50077 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 20:17:46.491346   50077 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 20:17:46.491409   50077 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 20:17:46.491419   50077 kubeadm.go:322] 
	I1024 20:17:46.491511   50077 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 20:17:46.491621   50077 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 20:17:46.491715   50077 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 20:17:46.491725   50077 kubeadm.go:322] 
	I1024 20:17:46.491829   50077 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1024 20:17:46.491929   50077 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 20:17:46.491937   50077 kubeadm.go:322] 
	I1024 20:17:46.492064   50077 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token axp9sy.xsem3c8nzt72b18p \
	I1024 20:17:46.492249   50077 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f \
	I1024 20:17:46.492292   50077 kubeadm.go:322]     --control-plane 	  
	I1024 20:17:46.492302   50077 kubeadm.go:322] 
	I1024 20:17:46.492423   50077 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 20:17:46.492435   50077 kubeadm.go:322] 
	I1024 20:17:46.492532   50077 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token axp9sy.xsem3c8nzt72b18p \
	I1024 20:17:46.492675   50077 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
	I1024 20:17:46.492686   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:17:46.492694   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:17:46.494152   50077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:17:46.495677   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:17:46.510639   50077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:17:46.539872   50077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:17:46.539933   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:46.539945   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=old-k8s-version-467375 minikube.k8s.io/updated_at=2023_10_24T20_17_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:46.984338   50077 ops.go:34] apiserver oom_adj: -16
	I1024 20:17:46.984391   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:47.163022   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:47.798557   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:48.298499   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:48.798506   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:49.298076   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:49.798120   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.298504   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.798493   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:51.298777   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:51.798477   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:52.298309   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:52.798243   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.546645   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:17:50.552245   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 200:
	ok
	I1024 20:17:50.553721   49071 api_server.go:141] control plane version: v1.28.3
	I1024 20:17:50.553747   49071 api_server.go:131] duration metric: took 11.483829454s to wait for apiserver health ...
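	The api_server.go lines above show minikube polling the apiserver healthz endpoint (https://192.168.50.162:8443/healthz) until it answers 200 "ok". Below is a minimal, self-contained sketch of such a health poll; it skips TLS verification (minikube itself trusts the cluster's CA certificate), and the retry budget and interval are illustrative assumptions, not values taken from the log.

	// Minimal sketch of an apiserver healthz poll; InsecureSkipVerify is an assumption for brevity.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		url := "https://192.168.50.162:8443/healthz" // endpoint from the log above
		for i := 0; i < 60; i++ { // retry budget is an assumption
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, string(body))
					return
				}
			}
			time.Sleep(2 * time.Second) // retry interval is an assumption
		}
		fmt.Println("apiserver healthz never became healthy")
	}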
	I1024 20:17:50.553757   49071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:17:50.553784   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:50.553844   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:50.594504   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:50.594528   49071 cri.go:89] found id: ""
	I1024 20:17:50.594536   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:50.594586   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.598912   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:50.598963   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:50.644339   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:50.644355   49071 cri.go:89] found id: ""
	I1024 20:17:50.644362   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:50.644406   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.649046   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:50.649099   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:50.688245   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:50.688268   49071 cri.go:89] found id: ""
	I1024 20:17:50.688278   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:50.688330   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.692382   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:50.692429   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:50.736359   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:50.736384   49071 cri.go:89] found id: ""
	I1024 20:17:50.736393   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:50.736451   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.741226   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:50.741287   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:50.797894   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:50.797920   49071 cri.go:89] found id: ""
	I1024 20:17:50.797930   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:50.797997   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.802725   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:50.802781   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:50.851081   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:50.851106   49071 cri.go:89] found id: ""
	I1024 20:17:50.851115   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:50.851166   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.855549   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:50.855600   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:50.909237   49071 cri.go:89] found id: ""
	I1024 20:17:50.909265   49071 logs.go:284] 0 containers: []
	W1024 20:17:50.909276   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:50.909283   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:50.909355   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:50.958541   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:50.958567   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:50.958574   49071 cri.go:89] found id: ""
	I1024 20:17:50.958583   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:50.958638   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.962947   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.967261   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:50.967283   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:51.087158   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:51.087190   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:51.144421   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:51.144458   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:51.200040   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:51.200072   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:51.255703   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:51.255740   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:51.683831   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:51.683869   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:51.726821   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:51.726856   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:51.776977   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:51.777006   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:51.822826   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:51.822861   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:51.873557   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.873838   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.874063   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.874313   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:51.900648   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:51.900690   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:51.916123   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:51.916161   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:51.960440   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:51.960470   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:52.010020   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:52.010051   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:52.051039   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:52.051063   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:52.051113   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:52.051127   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051142   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051162   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051173   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:52.051183   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:52.051190   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:53.298168   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:53.798546   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:54.298175   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:54.798534   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:55.298510   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:55.798562   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:56.297914   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:56.797930   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:57.298527   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:57.798493   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:58.298630   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:58.798550   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:59.298526   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:59.798537   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:00.298538   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:00.798072   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:01.014502   50077 kubeadm.go:1081] duration metric: took 14.474620601s to wait for elevateKubeSystemPrivileges.
	I1024 20:18:01.014547   50077 kubeadm.go:406] StartCluster complete in 5m39.9402605s
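	The long run of `kubectl get sa default` commands above is minikube waiting for the default service account to appear before declaring the cluster started (the elevateKubeSystemPrivileges step summarized in the duration metric). A minimal sketch of that poll loop follows; the kubectl path and kubeconfig flag are copied from the log, while the timeout and retry interval are assumptions for illustration, not minikube's actual values.

	// Minimal sketch: poll "kubectl get sa default" until the default ServiceAccount exists.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.16.0/kubectl"
		args := []string{"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}

		deadline := time.Now().Add(2 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			// Equivalent of: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=...
			if err := exec.Command("sudo", append([]string{kubectl}, args...)...).Run(); err == nil {
				fmt.Println("default service account exists; cluster is ready for addons")
				return
			}
			time.Sleep(500 * time.Millisecond) // assumed retry interval
		}
		fmt.Println("timed out waiting for the default service account")
	}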
	I1024 20:18:01.014569   50077 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:18:01.014667   50077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:18:01.017210   50077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:18:01.017539   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:18:01.017574   50077 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:18:01.017659   50077 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017666   50077 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017677   50077 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-467375"
	W1024 20:18:01.017690   50077 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:18:01.017695   50077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-467375"
	I1024 20:18:01.017699   50077 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017718   50077 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-467375"
	W1024 20:18:01.017727   50077 addons.go:240] addon metrics-server should already be in state true
	I1024 20:18:01.017731   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.017777   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.017816   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:18:01.018053   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018088   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.018111   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018122   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018149   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.018257   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.036179   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37631
	I1024 20:18:01.036834   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.037477   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.037504   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.037665   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43905
	I1024 20:18:01.037824   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34475
	I1024 20:18:01.037912   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.038074   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.038220   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.038306   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.038850   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.038867   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.039010   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.039021   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.039391   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.039410   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.039925   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.039949   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.039974   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.040014   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.041243   50077 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-467375"
	W1024 20:18:01.041258   50077 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:18:01.041277   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.041611   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.041645   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.056254   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
	I1024 20:18:01.056888   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.057215   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I1024 20:18:01.057487   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.057502   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.057895   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.057956   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.058536   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.058574   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.058844   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.058857   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.058929   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I1024 20:18:01.059172   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.059288   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.059451   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.059964   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.059975   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.060353   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.060565   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.061107   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.062802   50077 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:18:01.064189   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:18:01.064209   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:18:01.064230   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.062154   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.066082   50077 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:18:01.067046   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.067880   50077 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:18:01.067901   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:18:01.067921   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.068400   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.068432   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.069073   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.069343   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.069484   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.069587   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.071678   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.072196   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.072220   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.072596   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.072776   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.072905   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.073043   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.079576   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I1024 20:18:01.080025   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.080592   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.080613   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.081035   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.081240   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.083090   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.083404   50077 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:18:01.083425   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:18:01.083443   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.086433   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.086802   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.086824   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.087003   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.087198   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.087348   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.087506   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.197205   50077 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-467375" context rescaled to 1 replicas
	I1024 20:18:01.197249   50077 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:18:01.200328   50077 out.go:177] * Verifying Kubernetes components...
	I1024 20:18:02.061971   49071 system_pods.go:59] 8 kube-system pods found
	I1024 20:18:02.062015   49071 system_pods.go:61] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running
	I1024 20:18:02.062024   49071 system_pods.go:61] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running
	I1024 20:18:02.062031   49071 system_pods.go:61] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running
	I1024 20:18:02.062040   49071 system_pods.go:61] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running
	I1024 20:18:02.062047   49071 system_pods.go:61] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running
	I1024 20:18:02.062054   49071 system_pods.go:61] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running
	I1024 20:18:02.062066   49071 system_pods.go:61] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:02.062078   49071 system_pods.go:61] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running
	I1024 20:18:02.062086   49071 system_pods.go:74] duration metric: took 11.508322005s to wait for pod list to return data ...
	I1024 20:18:02.062098   49071 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:18:02.065560   49071 default_sa.go:45] found service account: "default"
	I1024 20:18:02.065585   49071 default_sa.go:55] duration metric: took 3.476366ms for default service account to be created ...
	I1024 20:18:02.065595   49071 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:18:02.073224   49071 system_pods.go:86] 8 kube-system pods found
	I1024 20:18:02.073253   49071 system_pods.go:89] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running
	I1024 20:18:02.073262   49071 system_pods.go:89] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running
	I1024 20:18:02.073269   49071 system_pods.go:89] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running
	I1024 20:18:02.073277   49071 system_pods.go:89] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running
	I1024 20:18:02.073284   49071 system_pods.go:89] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running
	I1024 20:18:02.073290   49071 system_pods.go:89] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running
	I1024 20:18:02.073313   49071 system_pods.go:89] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:02.073326   49071 system_pods.go:89] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running
	I1024 20:18:02.073335   49071 system_pods.go:126] duration metric: took 7.733883ms to wait for k8s-apps to be running ...
	I1024 20:18:02.073346   49071 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:18:02.073405   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:18:02.093085   49071 system_svc.go:56] duration metric: took 19.727658ms WaitForService to wait for kubelet.
	I1024 20:18:02.093113   49071 kubeadm.go:581] duration metric: took 4m46.397215509s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:18:02.093135   49071 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:18:02.101982   49071 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:18:02.102007   49071 node_conditions.go:123] node cpu capacity is 2
	I1024 20:18:02.102018   49071 node_conditions.go:105] duration metric: took 8.878046ms to run NodePressure ...
	I1024 20:18:02.102035   49071 start.go:228] waiting for startup goroutines ...
	I1024 20:18:02.102041   49071 start.go:233] waiting for cluster config update ...
	I1024 20:18:02.102054   49071 start.go:242] writing updated cluster config ...
	I1024 20:18:02.102767   49071 ssh_runner.go:195] Run: rm -f paused
	I1024 20:18:02.159693   49071 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:18:02.161831   49071 out.go:177] * Done! kubectl is now configured to use "no-preload-014826" cluster and "default" namespace by default
	I1024 20:18:01.201778   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:18:01.315241   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:18:01.335753   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:18:01.339160   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:18:01.339182   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:18:01.376704   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:18:01.376726   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:18:01.385150   50077 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-467375" to be "Ready" ...
	I1024 20:18:01.385223   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 20:18:01.443957   50077 node_ready.go:49] node "old-k8s-version-467375" has status "Ready":"True"
	I1024 20:18:01.443978   50077 node_ready.go:38] duration metric: took 58.799937ms waiting for node "old-k8s-version-467375" to be "Ready" ...
	I1024 20:18:01.443987   50077 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:18:01.453968   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:18:01.453998   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:18:01.481599   50077 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:01.543065   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:18:02.715998   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.400725332s)
	I1024 20:18:02.716049   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716062   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716066   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.38027937s)
	I1024 20:18:02.716103   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716120   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716152   50077 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.330913087s)
	I1024 20:18:02.716170   50077 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1024 20:18:02.716377   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716392   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.716402   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716410   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716512   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.716522   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716536   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.716547   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716557   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716623   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716637   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.717532   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.717534   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.717554   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.790444   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.790480   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.790901   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.790925   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895176   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.352065096s)
	I1024 20:18:02.895243   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.895268   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.895611   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.895630   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895634   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.895639   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.895654   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.895875   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.895888   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895905   50077 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-467375"
	I1024 20:18:02.897664   50077 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1024 20:18:02.899508   50077 addons.go:502] enable addons completed in 1.881940564s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1024 20:18:03.719917   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:06.207388   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:08.207967   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:10.708258   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:12.208133   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"True"
	I1024 20:18:12.208155   50077 pod_ready.go:81] duration metric: took 10.726531733s waiting for pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.208166   50077 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9bpht" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.213213   50077 pod_ready.go:92] pod "kube-proxy-9bpht" in "kube-system" namespace has status "Ready":"True"
	I1024 20:18:12.213237   50077 pod_ready.go:81] duration metric: took 5.063943ms waiting for pod "kube-proxy-9bpht" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.213247   50077 pod_ready.go:38] duration metric: took 10.769249135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:18:12.213267   50077 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:18:12.213344   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:18:12.228272   50077 api_server.go:72] duration metric: took 11.030986098s to wait for apiserver process to appear ...
	I1024 20:18:12.228295   50077 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:18:12.228313   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:18:12.234663   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1024 20:18:12.235584   50077 api_server.go:141] control plane version: v1.16.0
	I1024 20:18:12.235599   50077 api_server.go:131] duration metric: took 7.297294ms to wait for apiserver health ...
	I1024 20:18:12.235605   50077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:18:12.239203   50077 system_pods.go:59] 4 kube-system pods found
	I1024 20:18:12.239228   50077 system_pods.go:61] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.239235   50077 system_pods.go:61] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.239246   50077 system_pods.go:61] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.239292   50077 system_pods.go:61] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.239307   50077 system_pods.go:74] duration metric: took 3.696523ms to wait for pod list to return data ...
	I1024 20:18:12.239315   50077 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:18:12.242065   50077 default_sa.go:45] found service account: "default"
	I1024 20:18:12.242080   50077 default_sa.go:55] duration metric: took 2.760528ms for default service account to be created ...
	I1024 20:18:12.242086   50077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:18:12.245602   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.245624   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.245631   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.245640   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.245648   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.245664   50077 retry.go:31] will retry after 287.935783ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:12.538837   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.538900   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.538924   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.538942   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.538955   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.538979   50077 retry.go:31] will retry after 320.680304ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:12.864800   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.864826   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.864832   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.864838   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.864844   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.864858   50077 retry.go:31] will retry after 364.04425ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:13.233903   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:13.233927   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:13.233934   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:13.233941   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:13.233946   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:13.233974   50077 retry.go:31] will retry after 559.821457ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:13.799208   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:13.799234   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:13.799240   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:13.799246   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:13.799252   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:13.799266   50077 retry.go:31] will retry after 522.263157ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:14.325735   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:14.325767   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:14.325776   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:14.325789   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:14.325799   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:14.325817   50077 retry.go:31] will retry after 668.137602ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:14.999589   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:14.999614   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:14.999620   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:14.999626   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:14.999632   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:14.999646   50077 retry.go:31] will retry after 859.983274ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:15.865531   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:15.865556   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:15.865561   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:15.865568   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:15.865573   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:15.865589   50077 retry.go:31] will retry after 1.238765858s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:17.109999   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:17.110023   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:17.110028   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:17.110035   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:17.110041   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:17.110054   50077 retry.go:31] will retry after 1.485428629s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:18.600612   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:18.600635   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:18.600641   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:18.600647   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:18.600652   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:18.600665   50077 retry.go:31] will retry after 2.290652681s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:20.897529   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:20.897556   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:20.897562   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:20.897571   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:20.897577   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:20.897593   50077 retry.go:31] will retry after 2.367552906s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:23.270766   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:23.270792   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:23.270800   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:23.270810   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:23.270817   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:23.270834   50077 retry.go:31] will retry after 2.861357376s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:26.136663   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:26.136696   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:26.136704   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:26.136715   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:26.136725   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:26.136743   50077 retry.go:31] will retry after 3.526737387s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:29.670148   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:29.670175   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:29.670181   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:29.670188   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:29.670195   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:29.670215   50077 retry.go:31] will retry after 5.450931485s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:35.125964   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:35.125989   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:35.125994   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:35.126001   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:35.126007   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:35.126022   50077 retry.go:31] will retry after 5.914408322s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:41.046649   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:41.046670   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:41.046677   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:41.046684   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:41.046690   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:41.046704   50077 retry.go:31] will retry after 6.748980526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:47.802189   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:47.802212   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:47.802217   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:47.802225   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:47.802230   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:47.802244   50077 retry.go:31] will retry after 8.662562452s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:56.471025   50077 system_pods.go:86] 7 kube-system pods found
	I1024 20:18:56.471062   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:56.471071   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:18:56.471079   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:18:56.471086   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:56.471094   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Pending
	I1024 20:18:56.471108   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:56.471121   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:56.471142   50077 retry.go:31] will retry after 10.356793998s: missing components: etcd, kube-scheduler
	I1024 20:19:06.834711   50077 system_pods.go:86] 8 kube-system pods found
	I1024 20:19:06.834741   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:19:06.834749   50077 system_pods.go:89] "etcd-old-k8s-version-467375" [8e194c9a-b258-4488-9fda-24b681d09d8d] Pending
	I1024 20:19:06.834755   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:19:06.834762   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:19:06.834767   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:19:06.834772   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Running
	I1024 20:19:06.834782   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:19:06.834792   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:19:06.834809   50077 retry.go:31] will retry after 14.609583217s: missing components: etcd
	I1024 20:19:21.450651   50077 system_pods.go:86] 8 kube-system pods found
	I1024 20:19:21.450678   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:19:21.450685   50077 system_pods.go:89] "etcd-old-k8s-version-467375" [8e194c9a-b258-4488-9fda-24b681d09d8d] Running
	I1024 20:19:21.450689   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:19:21.450693   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:19:21.450699   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:19:21.450709   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Running
	I1024 20:19:21.450719   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:19:21.450732   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:19:21.450745   50077 system_pods.go:126] duration metric: took 1m9.20865321s to wait for k8s-apps to be running ...
	I1024 20:19:21.450757   50077 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:19:21.450800   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:19:21.468030   50077 system_svc.go:56] duration metric: took 17.254248ms WaitForService to wait for kubelet.
	I1024 20:19:21.468061   50077 kubeadm.go:581] duration metric: took 1m20.270780436s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:19:21.468089   50077 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:19:21.471958   50077 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:19:21.471982   50077 node_conditions.go:123] node cpu capacity is 2
	I1024 20:19:21.471993   50077 node_conditions.go:105] duration metric: took 3.898893ms to run NodePressure ...
	I1024 20:19:21.472003   50077 start.go:228] waiting for startup goroutines ...
	I1024 20:19:21.472008   50077 start.go:233] waiting for cluster config update ...
	I1024 20:19:21.472018   50077 start.go:242] writing updated cluster config ...
	I1024 20:19:21.472257   50077 ssh_runner.go:195] Run: rm -f paused
	I1024 20:19:21.520082   50077 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1024 20:19:21.522545   50077 out.go:177] 
	W1024 20:19:21.524125   50077 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1024 20:19:21.525515   50077 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1024 20:19:21.527113   50077 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-467375" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 20:11:17 UTC, ends at Tue 2023-10-24 20:25:18 UTC. --
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.625114511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179118625099797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b5722f76-ef64-406a-b4c2-ab4002594c60 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.625834914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1faabbb3-4b6b-4326-9482-97ae88fa8d39 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.625880904Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1faabbb3-4b6b-4326-9482-97ae88fa8d39 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.626087945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178344527784418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-1865-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7033aab4c2133afc2f0545d40a04f014e210655391c56beb79b856380138a7,PodSandboxId:25869d82b77f0d0362587016670201cfb1fbda91a02992947e0bc7a61b66be1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178321362887232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38a424c5-7864-4116-b76f-3cf8ea7f8ce5,},Annotations:map[string]string{io.kubernetes.container.hash: 6e578840,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0,PodSandboxId:f54e65b725cb62f9455c7f0f1d24d8df3bdadb8a2555b7649db6074cc1a4e5ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178319590590966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6qq4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40,},Annotations:map[string]string{io.kubernetes.container.hash: 52a084ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3,PodSandboxId:90f778b2d55f6c8e9f9d61b222d30e2d38bb5af07a9bf7c719acbfda07b99171,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178314716403834,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thkqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55c1a6e9-
7a56-499f-a51c-41e4cbb1490d,},Annotations:map[string]string{io.kubernetes.container.hash: 54fc3b61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178312505442582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-18
65-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31,PodSandboxId:0e811808018d5196331b539838cbd673988b8aeda8933f9ff3c7024b78ec2516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178305991343819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ef7dee608c8f837
f86f8a82041c976,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2,PodSandboxId:330793c8976de0efa5fa88c059d2ccea78dcabb3b8d964e30da6e84158a88e33,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178305806433116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e87f9e66dfb9145ef494be8265dd5a6,},Annotations:map[string]string{io
.kubernetes.container.hash: c79c50a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc,PodSandboxId:b04361eae724627037166460d4491f4b0f59f0ab593e920843ce0c27b664d0fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178305300030394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a042a0bf4e39619ba37edb771d9c61c,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251,PodSandboxId:744cbeaf8172d0f1c3131377996c23645eeb8927d0ccaaafb8382311200402f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178305322862399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d620305d0efc571fe3c72b60af81484e,},Annotations:map[
string]string{io.kubernetes.container.hash: c8acb279,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1faabbb3-4b6b-4326-9482-97ae88fa8d39 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.668075611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3c05e003-10f6-49f7-86a8-e2dc6763db78 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.668129943Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3c05e003-10f6-49f7-86a8-e2dc6763db78 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.669689707Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=93d8d564-c894-4eb4-9b03-08b0e5e527a2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.670067737Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179118670051751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=93d8d564-c894-4eb4-9b03-08b0e5e527a2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.670591575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1de79e78-90cb-49a3-b808-5e6908d0f290 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.670674855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1de79e78-90cb-49a3-b808-5e6908d0f290 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.670864011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178344527784418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-1865-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7033aab4c2133afc2f0545d40a04f014e210655391c56beb79b856380138a7,PodSandboxId:25869d82b77f0d0362587016670201cfb1fbda91a02992947e0bc7a61b66be1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178321362887232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38a424c5-7864-4116-b76f-3cf8ea7f8ce5,},Annotations:map[string]string{io.kubernetes.container.hash: 6e578840,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0,PodSandboxId:f54e65b725cb62f9455c7f0f1d24d8df3bdadb8a2555b7649db6074cc1a4e5ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178319590590966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6qq4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40,},Annotations:map[string]string{io.kubernetes.container.hash: 52a084ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3,PodSandboxId:90f778b2d55f6c8e9f9d61b222d30e2d38bb5af07a9bf7c719acbfda07b99171,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178314716403834,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thkqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55c1a6e9-
7a56-499f-a51c-41e4cbb1490d,},Annotations:map[string]string{io.kubernetes.container.hash: 54fc3b61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178312505442582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-18
65-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31,PodSandboxId:0e811808018d5196331b539838cbd673988b8aeda8933f9ff3c7024b78ec2516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178305991343819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ef7dee608c8f837
f86f8a82041c976,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2,PodSandboxId:330793c8976de0efa5fa88c059d2ccea78dcabb3b8d964e30da6e84158a88e33,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178305806433116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e87f9e66dfb9145ef494be8265dd5a6,},Annotations:map[string]string{io
.kubernetes.container.hash: c79c50a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc,PodSandboxId:b04361eae724627037166460d4491f4b0f59f0ab593e920843ce0c27b664d0fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178305300030394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a042a0bf4e39619ba37edb771d9c61c,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251,PodSandboxId:744cbeaf8172d0f1c3131377996c23645eeb8927d0ccaaafb8382311200402f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178305322862399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d620305d0efc571fe3c72b60af81484e,},Annotations:map[
string]string{io.kubernetes.container.hash: c8acb279,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1de79e78-90cb-49a3-b808-5e6908d0f290 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.711101751Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cb17d22e-079a-47cc-8346-f8339bedb604 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.711207669Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cb17d22e-079a-47cc-8346-f8339bedb604 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.712593112Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=32217412-cc8e-4395-8ba8-9f199c2a837a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.712969042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179118712956391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=32217412-cc8e-4395-8ba8-9f199c2a837a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.713660258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c7c3e9ab-0d33-441c-9b28-f25212b70b18 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.713736699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c7c3e9ab-0d33-441c-9b28-f25212b70b18 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.713966202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178344527784418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-1865-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7033aab4c2133afc2f0545d40a04f014e210655391c56beb79b856380138a7,PodSandboxId:25869d82b77f0d0362587016670201cfb1fbda91a02992947e0bc7a61b66be1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178321362887232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38a424c5-7864-4116-b76f-3cf8ea7f8ce5,},Annotations:map[string]string{io.kubernetes.container.hash: 6e578840,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0,PodSandboxId:f54e65b725cb62f9455c7f0f1d24d8df3bdadb8a2555b7649db6074cc1a4e5ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178319590590966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6qq4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40,},Annotations:map[string]string{io.kubernetes.container.hash: 52a084ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3,PodSandboxId:90f778b2d55f6c8e9f9d61b222d30e2d38bb5af07a9bf7c719acbfda07b99171,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178314716403834,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thkqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55c1a6e9-
7a56-499f-a51c-41e4cbb1490d,},Annotations:map[string]string{io.kubernetes.container.hash: 54fc3b61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178312505442582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-18
65-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31,PodSandboxId:0e811808018d5196331b539838cbd673988b8aeda8933f9ff3c7024b78ec2516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178305991343819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ef7dee608c8f837
f86f8a82041c976,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2,PodSandboxId:330793c8976de0efa5fa88c059d2ccea78dcabb3b8d964e30da6e84158a88e33,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178305806433116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e87f9e66dfb9145ef494be8265dd5a6,},Annotations:map[string]string{io
.kubernetes.container.hash: c79c50a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc,PodSandboxId:b04361eae724627037166460d4491f4b0f59f0ab593e920843ce0c27b664d0fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178305300030394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a042a0bf4e39619ba37edb771d9c61c,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251,PodSandboxId:744cbeaf8172d0f1c3131377996c23645eeb8927d0ccaaafb8382311200402f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178305322862399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d620305d0efc571fe3c72b60af81484e,},Annotations:map[
string]string{io.kubernetes.container.hash: c8acb279,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c7c3e9ab-0d33-441c-9b28-f25212b70b18 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.751144147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e44c5b4c-c38b-4d92-9191-4e8d228e4a18 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.751209775Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e44c5b4c-c38b-4d92-9191-4e8d228e4a18 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.752699540Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=52f85372-705b-4f21-ad51-0fdf0e7f68ee name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.753083467Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179118753070974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=52f85372-705b-4f21-ad51-0fdf0e7f68ee name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.754035725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3d81e52d-2b18-4e9b-9d7a-56c93fe5ad38 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.754105896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3d81e52d-2b18-4e9b-9d7a-56c93fe5ad38 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:18 embed-certs-867165 crio[711]: time="2023-10-24 20:25:18.754302408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178344527784418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-1865-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7033aab4c2133afc2f0545d40a04f014e210655391c56beb79b856380138a7,PodSandboxId:25869d82b77f0d0362587016670201cfb1fbda91a02992947e0bc7a61b66be1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178321362887232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38a424c5-7864-4116-b76f-3cf8ea7f8ce5,},Annotations:map[string]string{io.kubernetes.container.hash: 6e578840,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0,PodSandboxId:f54e65b725cb62f9455c7f0f1d24d8df3bdadb8a2555b7649db6074cc1a4e5ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178319590590966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6qq4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40,},Annotations:map[string]string{io.kubernetes.container.hash: 52a084ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3,PodSandboxId:90f778b2d55f6c8e9f9d61b222d30e2d38bb5af07a9bf7c719acbfda07b99171,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178314716403834,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thkqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55c1a6e9-
7a56-499f-a51c-41e4cbb1490d,},Annotations:map[string]string{io.kubernetes.container.hash: 54fc3b61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178312505442582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-18
65-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31,PodSandboxId:0e811808018d5196331b539838cbd673988b8aeda8933f9ff3c7024b78ec2516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178305991343819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ef7dee608c8f837
f86f8a82041c976,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2,PodSandboxId:330793c8976de0efa5fa88c059d2ccea78dcabb3b8d964e30da6e84158a88e33,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178305806433116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e87f9e66dfb9145ef494be8265dd5a6,},Annotations:map[string]string{io
.kubernetes.container.hash: c79c50a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc,PodSandboxId:b04361eae724627037166460d4491f4b0f59f0ab593e920843ce0c27b664d0fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178305300030394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a042a0bf4e39619ba37edb771d9c61c,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251,PodSandboxId:744cbeaf8172d0f1c3131377996c23645eeb8927d0ccaaafb8382311200402f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178305322862399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d620305d0efc571fe3c72b60af81484e,},Annotations:map[
string]string{io.kubernetes.container.hash: c8acb279,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3d81e52d-2b18-4e9b-9d7a-56c93fe5ad38 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	26f391c93fe16       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   2db5306e556fe       storage-provisioner
	9c7033aab4c21       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   25869d82b77f0       busybox
	9e2b63eae7db7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   f54e65b725cb6       coredns-5dd5756b68-6qq4r
	a9906107f32c1       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      13 minutes ago      Running             kube-proxy                1                   90f778b2d55f6       kube-proxy-thkqr
	2b61033b8afd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   2db5306e556fe       storage-provisioner
	d23e68e4d4a23       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      13 minutes ago      Running             kube-scheduler            1                   0e811808018d5       kube-scheduler-embed-certs-867165
	82b51425efb50       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   330793c8976de       etcd-embed-certs-867165
	7217044d2e039       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      13 minutes ago      Running             kube-apiserver            1                   744cbeaf8172d       kube-apiserver-embed-certs-867165
	e159067fdfc42       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      13 minutes ago      Running             kube-controller-manager   1                   b04361eae7246       kube-controller-manager-embed-certs-867165
	
	* 
	* ==> coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57575 - 56234 "HINFO IN 4712219434935555436.3172398071474408327. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013447647s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-867165
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-867165
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=embed-certs-867165
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T20_02_59_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 20:02:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-867165
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 20:25:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 20:22:35 +0000   Tue, 24 Oct 2023 20:02:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 20:22:35 +0000   Tue, 24 Oct 2023 20:02:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 20:22:35 +0000   Tue, 24 Oct 2023 20:02:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 20:22:35 +0000   Tue, 24 Oct 2023 20:12:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.10
	  Hostname:    embed-certs-867165
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 602ce82d6b5a46b4bc42fbc229933dff
	  System UUID:                602ce82d-6b5a-46b4-bc42-fbc229933dff
	  Boot ID:                    d24d6ea4-501f-4b37-a172-fe947a75312c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-6qq4r                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-867165                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-867165             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-867165    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-thkqr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-867165             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-pv9ww               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-867165 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-867165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-867165 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-867165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-867165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-867165 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node embed-certs-867165 status is now: NodeReady
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-867165 event: Registered Node embed-certs-867165 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-867165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-867165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-867165 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-867165 event: Registered Node embed-certs-867165 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct24 20:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.327496] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.565243] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151969] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.440297] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.446921] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.097748] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.138995] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.122858] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.255571] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.052062] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[Oct24 20:12] kauditd_printk_skb: 29 callbacks suppressed
	
	* 
	* ==> etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] <==
	* {"level":"info","ts":"2023-10-24T20:11:54.434338Z","caller":"traceutil/trace.go:171","msg":"trace[1993328144] range","detail":"{range_begin:/registry/events/default/busybox.1791242d451c4ead; range_end:; response_count:1; response_revision:562; }","duration":"988.459355ms","start":"2023-10-24T20:11:53.445871Z","end":"2023-10-24T20:11:54.434331Z","steps":["trace[1993328144] 'agreement among raft nodes before linearized reading'  (duration: 988.283214ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T20:11:54.43438Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-24T20:11:53.44586Z","time spent":"988.512022ms","remote":"127.0.0.1:56236","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":785,"request content":"key:\"/registry/events/default/busybox.1791242d451c4ead\" "}
	{"level":"warn","ts":"2023-10-24T20:11:54.434763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"984.938686ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-10-24T20:11:54.434815Z","caller":"traceutil/trace.go:171","msg":"trace[607005491] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:562; }","duration":"984.993967ms","start":"2023-10-24T20:11:53.449814Z","end":"2023-10-24T20:11:54.434808Z","steps":["trace[607005491] 'agreement among raft nodes before linearized reading'  (duration: 984.914579ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T20:11:54.434836Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-24T20:11:53.449804Z","time spent":"985.026855ms","remote":"127.0.0.1:56264","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":230,"request content":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" "}
	{"level":"info","ts":"2023-10-24T20:11:54.711883Z","caller":"traceutil/trace.go:171","msg":"trace[331690051] transaction","detail":"{read_only:false; response_revision:563; number_of_response:1; }","duration":"270.662001ms","start":"2023-10-24T20:11:54.441201Z","end":"2023-10-24T20:11:54.711863Z","steps":["trace[331690051] 'process raft request'  (duration: 270.457886ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:11:54.712925Z","caller":"traceutil/trace.go:171","msg":"trace[496078760] linearizableReadLoop","detail":"{readStateIndex:597; appliedIndex:597; }","duration":"264.238262ms","start":"2023-10-24T20:11:54.448674Z","end":"2023-10-24T20:11:54.712912Z","steps":["trace[496078760] 'read index received'  (duration: 264.233681ms)","trace[496078760] 'applied index is now lower than readState.Index'  (duration: 3.328µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T20:11:54.71316Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.770491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4133"}
	{"level":"info","ts":"2023-10-24T20:11:54.713221Z","caller":"traceutil/trace.go:171","msg":"trace[1026459042] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:563; }","duration":"264.843318ms","start":"2023-10-24T20:11:54.448368Z","end":"2023-10-24T20:11:54.713211Z","steps":["trace[1026459042] 'agreement among raft nodes before linearized reading'  (duration: 264.734782ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:11:54.922297Z","caller":"traceutil/trace.go:171","msg":"trace[835528844] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"186.27579ms","start":"2023-10-24T20:11:54.736007Z","end":"2023-10-24T20:11:54.922283Z","steps":["trace[835528844] 'process raft request'  (duration: 186.241909ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:11:54.922696Z","caller":"traceutil/trace.go:171","msg":"trace[1282737226] transaction","detail":"{read_only:false; response_revision:564; number_of_response:1; }","duration":"469.022718ms","start":"2023-10-24T20:11:54.453662Z","end":"2023-10-24T20:11:54.922684Z","steps":["trace[1282737226] 'process raft request'  (duration: 448.194095ms)","trace[1282737226] 'compare'  (duration: 20.321745ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T20:11:54.922875Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-24T20:11:54.453648Z","time spent":"469.074602ms","remote":"127.0.0.1:56260","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3544,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:513 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3490 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2023-10-24T20:11:54.922956Z","caller":"traceutil/trace.go:171","msg":"trace[1017741281] linearizableReadLoop","detail":"{readStateIndex:598; appliedIndex:597; }","duration":"209.866139ms","start":"2023-10-24T20:11:54.713084Z","end":"2023-10-24T20:11:54.922951Z","steps":["trace[1017741281] 'read index received'  (duration: 188.776089ms)","trace[1017741281] 'applied index is now lower than readState.Index'  (duration: 21.089226ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T20:11:54.923122Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"470.898462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2023-10-24T20:11:54.923185Z","caller":"traceutil/trace.go:171","msg":"trace[1921888367] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:565; }","duration":"470.962695ms","start":"2023-10-24T20:11:54.452215Z","end":"2023-10-24T20:11:54.923178Z","steps":["trace[1921888367] 'agreement among raft nodes before linearized reading'  (duration: 470.878777ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T20:11:54.923212Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-24T20:11:54.452206Z","time spent":"470.996961ms","remote":"127.0.0.1:56264","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":232,"request content":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" "}
	{"level":"warn","ts":"2023-10-24T20:11:54.923313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.444513ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/busybox.1791242d3e84c3ab\" ","response":"range_response_count:1 size:880"}
	{"level":"info","ts":"2023-10-24T20:11:54.923327Z","caller":"traceutil/trace.go:171","msg":"trace[778577011] range","detail":"{range_begin:/registry/events/default/busybox.1791242d3e84c3ab; range_end:; response_count:1; response_revision:565; }","duration":"197.459599ms","start":"2023-10-24T20:11:54.725864Z","end":"2023-10-24T20:11:54.923323Z","steps":["trace[778577011] 'agreement among raft nodes before linearized reading'  (duration: 197.428045ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:11:57.293362Z","caller":"traceutil/trace.go:171","msg":"trace[2030366714] linearizableReadLoop","detail":"{readStateIndex:626; appliedIndex:625; }","duration":"125.583884ms","start":"2023-10-24T20:11:57.167763Z","end":"2023-10-24T20:11:57.293347Z","steps":["trace[2030366714] 'read index received'  (duration: 125.234487ms)","trace[2030366714] 'applied index is now lower than readState.Index'  (duration: 348.777µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T20:11:57.293721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.955308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-867165\" ","response":"range_response_count:1 size:5677"}
	{"level":"info","ts":"2023-10-24T20:11:57.294066Z","caller":"traceutil/trace.go:171","msg":"trace[1451677961] range","detail":"{range_begin:/registry/minions/embed-certs-867165; range_end:; response_count:1; response_revision:585; }","duration":"126.312347ms","start":"2023-10-24T20:11:57.167743Z","end":"2023-10-24T20:11:57.294055Z","steps":["trace[1451677961] 'agreement among raft nodes before linearized reading'  (duration: 125.735556ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:11:57.295078Z","caller":"traceutil/trace.go:171","msg":"trace[1284424414] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"128.211691ms","start":"2023-10-24T20:11:57.166857Z","end":"2023-10-24T20:11:57.295069Z","steps":["trace[1284424414] 'process raft request'  (duration: 126.256415ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:21:49.193866Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":855}
	{"level":"info","ts":"2023-10-24T20:21:49.203013Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":855,"took":"8.331968ms","hash":26409532}
	{"level":"info","ts":"2023-10-24T20:21:49.203084Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":26409532,"revision":855,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  20:25:19 up 14 min,  0 users,  load average: 0.33, 0.23, 0.15
	Linux embed-certs-867165 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] <==
	* I1024 20:21:50.927362       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:21:51.928026       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:21:51.928083       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:21:51.928091       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:21:51.928155       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:21:51.928231       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:21:51.929474       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:22:50.811856       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:22:51.928641       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:22:51.928834       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:22:51.928869       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:22:51.929654       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:22:51.929741       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:22:51.930917       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:23:50.811101       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 20:24:50.811294       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:24:51.930149       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:24:51.930300       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:24:51.930327       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:24:51.931629       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:24:51.931746       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:24:51.931772       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] <==
	* I1024 20:19:36.128286       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:20:05.661216       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:20:06.138065       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:20:35.667266       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:20:36.147461       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:21:05.673855       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:21:06.156373       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:21:35.679249       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:21:36.167029       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:22:05.685462       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:22:06.175632       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:22:35.693378       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:22:36.183603       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:23:05.699932       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:23:06.192741       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1024 20:23:08.282130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="371.282µs"
	I1024 20:23:21.279085       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="187.387µs"
	E1024 20:23:35.706267       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:23:36.202350       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:24:05.712434       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:24:06.211729       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:24:35.718419       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:24:36.222601       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:25:05.726227       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:25:06.233590       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] <==
	* I1024 20:11:55.096724       1 server_others.go:69] "Using iptables proxy"
	I1024 20:11:55.107840       1 node.go:141] Successfully retrieved node IP: 192.168.72.10
	I1024 20:11:55.161629       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 20:11:55.161684       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 20:11:55.164723       1 server_others.go:152] "Using iptables Proxier"
	I1024 20:11:55.164801       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 20:11:55.164988       1 server.go:846] "Version info" version="v1.28.3"
	I1024 20:11:55.165041       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:11:55.166111       1 config.go:188] "Starting service config controller"
	I1024 20:11:55.166195       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 20:11:55.166225       1 config.go:97] "Starting endpoint slice config controller"
	I1024 20:11:55.166231       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 20:11:55.170407       1 config.go:315] "Starting node config controller"
	I1024 20:11:55.170450       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 20:11:55.266339       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 20:11:55.266486       1 shared_informer.go:318] Caches are synced for service config
	I1024 20:11:55.274774       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] <==
	* I1024 20:11:48.465150       1 serving.go:348] Generated self-signed cert in-memory
	W1024 20:11:50.864932       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 20:11:50.865102       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 20:11:50.865133       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 20:11:50.865157       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 20:11:50.903703       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 20:11:50.903794       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:11:50.907773       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 20:11:50.907926       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 20:11:50.912941       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 20:11:50.913024       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 20:11:51.009657       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 20:11:17 UTC, ends at Tue 2023-10-24 20:25:19 UTC. --
	Oct 24 20:22:44 embed-certs-867165 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:22:44 embed-certs-867165 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:22:57 embed-certs-867165 kubelet[917]: E1024 20:22:57.282119     917 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 24 20:22:57 embed-certs-867165 kubelet[917]: E1024 20:22:57.282194     917 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 24 20:22:57 embed-certs-867165 kubelet[917]: E1024 20:22:57.282408     917 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l7zm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-pv9ww_kube-system(6a642ef8-3b64-4cf1-b905-a3c7f510f29f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 24 20:22:57 embed-certs-867165 kubelet[917]: E1024 20:22:57.282464     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:23:08 embed-certs-867165 kubelet[917]: E1024 20:23:08.263721     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:23:21 embed-certs-867165 kubelet[917]: E1024 20:23:21.262116     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:23:34 embed-certs-867165 kubelet[917]: E1024 20:23:34.267077     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:23:44 embed-certs-867165 kubelet[917]: E1024 20:23:44.280633     917 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:23:44 embed-certs-867165 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:23:44 embed-certs-867165 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:23:44 embed-certs-867165 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:23:46 embed-certs-867165 kubelet[917]: E1024 20:23:46.263168     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:23:59 embed-certs-867165 kubelet[917]: E1024 20:23:59.263147     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:24:12 embed-certs-867165 kubelet[917]: E1024 20:24:12.263351     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:24:24 embed-certs-867165 kubelet[917]: E1024 20:24:24.263110     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:24:36 embed-certs-867165 kubelet[917]: E1024 20:24:36.265928     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:24:44 embed-certs-867165 kubelet[917]: E1024 20:24:44.284577     917 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:24:44 embed-certs-867165 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:24:44 embed-certs-867165 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:24:44 embed-certs-867165 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:24:48 embed-certs-867165 kubelet[917]: E1024 20:24:48.262080     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:25:02 embed-certs-867165 kubelet[917]: E1024 20:25:02.264056     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:25:14 embed-certs-867165 kubelet[917]: E1024 20:25:14.263485     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	
	* 
	* ==> storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] <==
	* I1024 20:12:24.660342       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 20:12:24.679415       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 20:12:24.680556       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 20:12:42.102644       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 20:12:42.102879       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-867165_989b48c3-31de-413c-b8a0-62d1bb8e7055!
	I1024 20:12:42.103326       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c01928cf-4170-49fd-8f37-2d3fc3f03c41", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-867165_989b48c3-31de-413c-b8a0-62d1bb8e7055 became leader
	I1024 20:12:42.204844       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-867165_989b48c3-31de-413c-b8a0-62d1bb8e7055!
	
	* 
	* ==> storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] <==
	* I1024 20:11:53.468434       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1024 20:12:23.471358       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-867165 -n embed-certs-867165
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-867165 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-pv9ww
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-867165 describe pod metrics-server-57f55c9bc5-pv9ww
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-867165 describe pod metrics-server-57f55c9bc5-pv9ww: exit status 1 (76.996361ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-pv9ww" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-867165 describe pod metrics-server-57f55c9bc5-pv9ww: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-24 20:25:39.205740709 +0000 UTC m=+5100.971219705
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-643126 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-643126 logs -n 25: (1.689887486s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-636215                                        | pause-636215                 | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:01 UTC |
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-145190                              | stopped-upgrade-145190       | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:01 UTC |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:02 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-087071 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | disable-driver-mounts-087071                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:05 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-014826             | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-867165            | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC | 24 Oct 23 20:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-643126  | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC | 24 Oct 23 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC |                     |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-014826                  | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-867165                 | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467375        | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-643126       | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:08 UTC | 24 Oct 23 20:16 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467375             | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC | 24 Oct 23 20:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 20:09:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 20:09:32.850310   50077 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:09:32.850450   50077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:09:32.850462   50077 out.go:309] Setting ErrFile to fd 2...
	I1024 20:09:32.850470   50077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:09:32.850632   50077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:09:32.851152   50077 out.go:303] Setting JSON to false
	I1024 20:09:32.851985   50077 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6471,"bootTime":1698171702,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 20:09:32.852046   50077 start.go:138] virtualization: kvm guest
	I1024 20:09:32.854420   50077 out.go:177] * [old-k8s-version-467375] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 20:09:32.855945   50077 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:09:32.855955   50077 notify.go:220] Checking for updates...
	I1024 20:09:32.857502   50077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:09:32.858984   50077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:09:32.860444   50077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:09:32.861833   50077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 20:09:32.863229   50077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:09:32.864917   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:09:32.865284   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:09:32.865345   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:09:32.879470   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I1024 20:09:32.879865   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:09:32.880332   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:09:32.880355   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:09:32.880731   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:09:32.880894   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:09:32.882647   50077 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 20:09:32.884050   50077 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:09:32.884316   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:09:32.884351   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:09:32.897671   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38215
	I1024 20:09:32.898054   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:09:32.898495   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:09:32.898521   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:09:32.898837   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:09:32.899002   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:09:32.933365   50077 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 20:09:32.934993   50077 start.go:298] selected driver: kvm2
	I1024 20:09:32.935008   50077 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:09:32.935100   50077 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:09:32.935713   50077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:09:32.935789   50077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 20:09:32.949274   50077 install.go:137] /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1024 20:09:32.949613   50077 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 20:09:32.949670   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:09:32.949682   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:09:32.949693   50077 start_flags.go:323] config:
	{Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:09:32.949823   50077 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:09:32.951734   50077 out.go:177] * Starting control plane node old-k8s-version-467375 in cluster old-k8s-version-467375
	I1024 20:09:31.289529   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:32.953102   50077 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 20:09:32.953131   50077 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1024 20:09:32.953140   50077 cache.go:57] Caching tarball of preloaded images
	I1024 20:09:32.953220   50077 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 20:09:32.953230   50077 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1024 20:09:32.953361   50077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:09:32.953531   50077 start.go:365] acquiring machines lock for old-k8s-version-467375: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:09:37.369555   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:40.441571   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:46.521544   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:49.593529   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:55.673497   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:58.745605   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:04.825563   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:07.897530   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:13.977541   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:17.049658   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:23.129561   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:26.201528   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:32.281583   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:35.353592   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:41.433570   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:44.505586   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:50.585514   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:53.657506   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:59.737620   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:11:02.809631   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:11:05.812536   49198 start.go:369] acquired machines lock for "embed-certs-867165" in 4m26.940203259s
	I1024 20:11:05.812584   49198 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:05.812594   49198 fix.go:54] fixHost starting: 
	I1024 20:11:05.812911   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:05.812959   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:05.827853   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I1024 20:11:05.828400   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:05.828896   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:05.828922   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:05.829237   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:05.829432   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:05.829588   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:05.831229   49198 fix.go:102] recreateIfNeeded on embed-certs-867165: state=Stopped err=<nil>
	I1024 20:11:05.831249   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	W1024 20:11:05.831407   49198 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:05.833007   49198 out.go:177] * Restarting existing kvm2 VM for "embed-certs-867165" ...
	I1024 20:11:05.810496   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:05.810546   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:11:05.812388   49071 machine.go:91] provisioned docker machine in 4m37.419019216s
	I1024 20:11:05.812422   49071 fix.go:56] fixHost completed within 4m37.4383256s
	I1024 20:11:05.812427   49071 start.go:83] releasing machines lock for "no-preload-014826", held for 4m37.438344867s
	W1024 20:11:05.812453   49071 start.go:691] error starting host: provision: host is not running
	W1024 20:11:05.812538   49071 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1024 20:11:05.812551   49071 start.go:706] Will try again in 5 seconds ...
	I1024 20:11:05.834235   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Start
	I1024 20:11:05.834397   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring networks are active...
	I1024 20:11:05.835212   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring network default is active
	I1024 20:11:05.835540   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring network mk-embed-certs-867165 is active
	I1024 20:11:05.835850   49198 main.go:141] libmachine: (embed-certs-867165) Getting domain xml...
	I1024 20:11:05.836556   49198 main.go:141] libmachine: (embed-certs-867165) Creating domain...
	I1024 20:11:07.054253   49198 main.go:141] libmachine: (embed-certs-867165) Waiting to get IP...
	I1024 20:11:07.055379   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.055819   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.055911   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.055829   50328 retry.go:31] will retry after 212.147571ms: waiting for machine to come up
	I1024 20:11:07.269505   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.269953   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.270002   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.269942   50328 retry.go:31] will retry after 308.705783ms: waiting for machine to come up
	I1024 20:11:07.580602   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.581075   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.581103   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.581041   50328 retry.go:31] will retry after 467.682838ms: waiting for machine to come up
	I1024 20:11:08.050725   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:08.051121   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:08.051154   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:08.051070   50328 retry.go:31] will retry after 399.648518ms: waiting for machine to come up
	I1024 20:11:08.452605   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:08.452968   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:08.452999   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:08.452906   50328 retry.go:31] will retry after 617.165915ms: waiting for machine to come up
	I1024 20:11:10.812763   49071 start.go:365] acquiring machines lock for no-preload-014826: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:11:09.071803   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:09.072236   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:09.072268   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:09.072205   50328 retry.go:31] will retry after 678.895198ms: waiting for machine to come up
	I1024 20:11:09.753179   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:09.753658   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:09.753689   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:09.753600   50328 retry.go:31] will retry after 807.254598ms: waiting for machine to come up
	I1024 20:11:10.562345   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:10.562733   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:10.562761   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:10.562688   50328 retry.go:31] will retry after 921.950476ms: waiting for machine to come up
	I1024 20:11:11.485981   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:11.486498   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:11.486524   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:11.486452   50328 retry.go:31] will retry after 1.56679652s: waiting for machine to come up
	I1024 20:11:13.055209   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:13.055638   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:13.055664   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:13.055594   50328 retry.go:31] will retry after 2.296157501s: waiting for machine to come up
	I1024 20:11:15.355156   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:15.355522   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:15.355555   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:15.355460   50328 retry.go:31] will retry after 1.913484523s: waiting for machine to come up
	I1024 20:11:17.270771   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:17.271200   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:17.271237   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:17.271154   50328 retry.go:31] will retry after 2.867410465s: waiting for machine to come up
	I1024 20:11:20.142209   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:20.142651   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:20.142675   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:20.142603   50328 retry.go:31] will retry after 4.193720328s: waiting for machine to come up
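The retry.go lines above show the kvm2 driver polling libvirt for the guest's DHCP lease, sleeping for a growing, jittered interval between attempts. A minimal Go sketch of that wait-for-IP loop, assuming a hypothetical lookupIP in place of the driver's real lease lookup:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a hypothetical stand-in for the kvm2 driver's DHCP-lease query;
    // it fails until the guest has been handed an address.
    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

    // waitForIP retries lookupIP with a growing, jittered delay, mirroring the
    // "will retry after ...: waiting for machine to come up" lines in the log.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		if delay < 4*time.Second {
    			delay = delay * 3 / 2 // back off, roughly matching the log's progression
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	if _, err := waitForIP(2 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }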
	I1024 20:11:25.925856   49708 start.go:369] acquired machines lock for "default-k8s-diff-port-643126" in 3m22.313323811s
	I1024 20:11:25.925904   49708 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:25.925911   49708 fix.go:54] fixHost starting: 
	I1024 20:11:25.926296   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:25.926331   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:25.942871   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I1024 20:11:25.943321   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:25.943866   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:11:25.943890   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:25.944187   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:25.944359   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:25.944510   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:11:25.945833   49708 fix.go:102] recreateIfNeeded on default-k8s-diff-port-643126: state=Stopped err=<nil>
	I1024 20:11:25.945875   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	W1024 20:11:25.946039   49708 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:25.949057   49708 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-643126" ...
	I1024 20:11:24.340353   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.340876   49198 main.go:141] libmachine: (embed-certs-867165) Found IP for machine: 192.168.72.10
	I1024 20:11:24.340899   49198 main.go:141] libmachine: (embed-certs-867165) Reserving static IP address...
	I1024 20:11:24.340912   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has current primary IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.341389   49198 main.go:141] libmachine: (embed-certs-867165) Reserved static IP address: 192.168.72.10
	I1024 20:11:24.341430   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "embed-certs-867165", mac: "52:54:00:59:66:c6", ip: "192.168.72.10"} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.341453   49198 main.go:141] libmachine: (embed-certs-867165) Waiting for SSH to be available...
	I1024 20:11:24.341482   49198 main.go:141] libmachine: (embed-certs-867165) DBG | skip adding static IP to network mk-embed-certs-867165 - found existing host DHCP lease matching {name: "embed-certs-867165", mac: "52:54:00:59:66:c6", ip: "192.168.72.10"}
	I1024 20:11:24.341500   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Getting to WaitForSSH function...
	I1024 20:11:24.343707   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.344021   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.344050   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.344202   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Using SSH client type: external
	I1024 20:11:24.344229   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa (-rw-------)
	I1024 20:11:24.344263   49198 main.go:141] libmachine: (embed-certs-867165) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:11:24.344279   49198 main.go:141] libmachine: (embed-certs-867165) DBG | About to run SSH command:
	I1024 20:11:24.344290   49198 main.go:141] libmachine: (embed-certs-867165) DBG | exit 0
	I1024 20:11:24.433113   49198 main.go:141] libmachine: (embed-certs-867165) DBG | SSH cmd err, output: <nil>: 
	I1024 20:11:24.433578   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetConfigRaw
	I1024 20:11:24.434267   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:24.436768   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.437149   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.437178   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.437479   49198 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/config.json ...
	I1024 20:11:24.437738   49198 machine.go:88] provisioning docker machine ...
	I1024 20:11:24.437760   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:24.438014   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.438217   49198 buildroot.go:166] provisioning hostname "embed-certs-867165"
	I1024 20:11:24.438245   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.438431   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.440509   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.440861   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.440884   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.440998   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:24.441155   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.441329   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.441499   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:24.441644   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:24.441990   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:24.442009   49198 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-867165 && echo "embed-certs-867165" | sudo tee /etc/hostname
	I1024 20:11:24.570417   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-867165
	
	I1024 20:11:24.570456   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.573010   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.573421   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.573446   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.573634   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:24.573845   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.574000   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.574100   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:24.574296   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:24.574611   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:24.574628   49198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-867165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-867165/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-867165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:11:24.698255   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:24.698281   49198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:11:24.698298   49198 buildroot.go:174] setting up certificates
	I1024 20:11:24.698306   49198 provision.go:83] configureAuth start
	I1024 20:11:24.698317   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.698624   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:24.701552   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.701900   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.701954   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.702044   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.704047   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.704389   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.704413   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.704578   49198 provision.go:138] copyHostCerts
	I1024 20:11:24.704632   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:11:24.704648   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:11:24.704713   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:11:24.704794   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:11:24.704801   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:11:24.704828   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:11:24.704877   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:11:24.704883   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:11:24.704901   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:11:24.704961   49198 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.embed-certs-867165 san=[192.168.72.10 192.168.72.10 localhost 127.0.0.1 minikube embed-certs-867165]
	I1024 20:11:25.212018   49198 provision.go:172] copyRemoteCerts
	I1024 20:11:25.212075   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:11:25.212095   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.214791   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.215112   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.215141   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.215262   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.215490   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.215682   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.215805   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.301782   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:11:25.324352   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1024 20:11:25.346349   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:11:25.368012   49198 provision.go:86] duration metric: configureAuth took 669.695412ms
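The configureAuth step above regenerates the machine's server certificate with SANs covering the node IP, localhost and the machine name, then copies ca.pem, server.pem and server-key.pem into /etc/docker. A rough, self-contained Go sketch of issuing such a certificate; it is self-signed here for brevity, whereas the real flow signs with the shared CA under .minikube/certs:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	// SANs taken from the "generating server cert" line above.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-867165"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.72.10"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "embed-certs-867165"},
    	}
    	// Self-signed (template signs itself); minikube instead uses ca.pem/ca-key.pem.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }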
	I1024 20:11:25.368036   49198 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:11:25.368205   49198 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:25.368269   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.370479   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.370739   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.370782   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.370873   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.371063   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.371395   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.371593   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.371760   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:25.372083   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:25.372098   49198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:11:25.685250   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:11:25.685327   49198 machine.go:91] provisioned docker machine in 1.247541762s
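The "%!s(MISSING)" fragments in commands like the one above (and in the later `date` and `printf` lines) are not part of what actually ran on the guest: they are Go fmt placeholders, produced when an already-built shell command containing a literal %s is passed back through a Printf-style logger with no matching operand. A tiny Go reproduction, assuming that logging path:

    package main

    import "fmt"

    func main() {
    	// The command really sent over SSH contains a literal %s for printf(1).
    	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube`
    	// Logging it through Printf without an operand renders the %s verb as
    	// "%!s(MISSING)", exactly as the test log shows (go vet flags this
    	// non-constant format string, but it compiles and runs).
    	fmt.Printf("About to run SSH command:\n" + cmd + "\n")
    }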
	I1024 20:11:25.685347   49198 start.go:300] post-start starting for "embed-certs-867165" (driver="kvm2")
	I1024 20:11:25.685363   49198 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:11:25.685388   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.685781   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:11:25.685813   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.688378   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.688666   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.688712   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.688886   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.689115   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.689274   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.689463   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.775321   49198 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:11:25.779494   49198 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:11:25.779516   49198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:11:25.779590   49198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:11:25.779663   49198 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:11:25.779748   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:11:25.788441   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:25.809843   49198 start.go:303] post-start completed in 124.478424ms
	I1024 20:11:25.809946   49198 fix.go:56] fixHost completed within 19.997269664s
	I1024 20:11:25.809985   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.812709   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.813101   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.813133   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.813265   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.813464   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.813650   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.813819   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.813962   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:25.814293   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:25.814309   49198 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:11:25.925691   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178285.873274561
	
	I1024 20:11:25.925721   49198 fix.go:206] guest clock: 1698178285.873274561
	I1024 20:11:25.925731   49198 fix.go:219] Guest: 2023-10-24 20:11:25.873274561 +0000 UTC Remote: 2023-10-24 20:11:25.809967209 +0000 UTC m=+287.089115618 (delta=63.307352ms)
	I1024 20:11:25.925760   49198 fix.go:190] guest clock delta is within tolerance: 63.307352ms
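The guest-clock check above runs `date +%s.%N` inside the VM, parses the result, and compares it with the host-side timestamp; here the ~63ms delta is within tolerance, so the clock is left alone. A small Go sketch of that comparison, using the two timestamps from the log:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns the output of `date +%s.%N` (e.g. "1698178285.873274561")
    // into a time.Time, as the fixHost step does before comparing guest and host clocks.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	nsec := int64(0)
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1698178285.873274561")
    	if err != nil {
    		panic(err)
    	}
    	// Host-side timestamp from the same log entry; the delta works out to
    	// about 63ms, well inside the tolerance, so no adjustment is needed.
    	remote := time.Date(2023, 10, 24, 20, 11, 25, 809967209, time.UTC)
    	fmt.Printf("guest clock delta: %v\n", guest.Sub(remote))
    }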
	I1024 20:11:25.925767   49198 start.go:83] releasing machines lock for "embed-certs-867165", held for 20.113201351s
	I1024 20:11:25.925801   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.926046   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:25.928979   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.929337   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.929369   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.929547   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930011   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930171   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930239   49198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:11:25.930285   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.930332   49198 ssh_runner.go:195] Run: cat /version.json
	I1024 20:11:25.930356   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.932685   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.932918   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933167   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.933197   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933225   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.933254   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933377   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.933548   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.933600   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.933758   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.933773   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.933934   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.933941   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.934075   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:26.046804   49198 ssh_runner.go:195] Run: systemctl --version
	I1024 20:11:26.052139   49198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:11:26.195404   49198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:11:26.201515   49198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:11:26.201602   49198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:11:26.215298   49198 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:11:26.215312   49198 start.go:472] detecting cgroup driver to use...
	I1024 20:11:26.215375   49198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:11:26.228683   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:11:26.240279   49198 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:11:26.240328   49198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:11:26.252314   49198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:11:26.264748   49198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:11:26.363370   49198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:11:26.472219   49198 docker.go:214] disabling docker service ...
	I1024 20:11:26.472293   49198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:11:26.485325   49198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:11:26.497949   49198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:11:26.614981   49198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:11:26.731140   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:11:26.750199   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:11:26.770158   49198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:11:26.770224   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.781180   49198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:11:26.781246   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.791901   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.802435   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.812848   49198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:11:26.826330   49198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:11:26.837268   49198 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:11:26.837350   49198 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:11:26.853637   49198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:11:26.866347   49198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:11:26.985185   49198 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:11:27.154650   49198 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:11:27.154718   49198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:11:27.160801   49198 start.go:540] Will wait 60s for crictl version
	I1024 20:11:27.160848   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:11:27.164920   49198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:11:27.202690   49198 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:11:27.202779   49198 ssh_runner.go:195] Run: crio --version
	I1024 20:11:27.250594   49198 ssh_runner.go:195] Run: crio --version
	I1024 20:11:27.296108   49198 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 20:11:25.950421   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Start
	I1024 20:11:25.950594   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring networks are active...
	I1024 20:11:25.951296   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring network default is active
	I1024 20:11:25.951666   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring network mk-default-k8s-diff-port-643126 is active
	I1024 20:11:25.952059   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Getting domain xml...
	I1024 20:11:25.952807   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Creating domain...
	I1024 20:11:27.231286   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting to get IP...
	I1024 20:11:27.232283   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.232673   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.232749   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.232677   50444 retry.go:31] will retry after 208.58934ms: waiting for machine to come up
	I1024 20:11:27.443376   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.443879   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.443919   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.443821   50444 retry.go:31] will retry after 257.382495ms: waiting for machine to come up
	I1024 20:11:27.703424   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.703968   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.704002   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.703931   50444 retry.go:31] will retry after 397.047762ms: waiting for machine to come up
	I1024 20:11:28.102593   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.103138   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.103169   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:28.103091   50444 retry.go:31] will retry after 512.560427ms: waiting for machine to come up
	I1024 20:11:27.297540   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:27.300396   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:27.300799   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:27.300829   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:27.301066   49198 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1024 20:11:27.305045   49198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:27.320300   49198 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:11:27.320366   49198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:27.359702   49198 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:11:27.359766   49198 ssh_runner.go:195] Run: which lz4
	I1024 20:11:27.363540   49198 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 20:11:27.367559   49198 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:11:27.367583   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1024 20:11:28.616845   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.617310   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.617342   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:28.617240   50444 retry.go:31] will retry after 674.554893ms: waiting for machine to come up
	I1024 20:11:29.293139   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:29.293640   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:29.293667   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:29.293603   50444 retry.go:31] will retry after 903.982479ms: waiting for machine to come up
	I1024 20:11:30.199764   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:30.200181   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:30.200218   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:30.200119   50444 retry.go:31] will retry after 835.036056ms: waiting for machine to come up
	I1024 20:11:31.037123   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:31.037584   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:31.037609   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:31.037524   50444 retry.go:31] will retry after 1.242617103s: waiting for machine to come up
	I1024 20:11:32.281808   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:32.282287   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:32.282312   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:32.282243   50444 retry.go:31] will retry after 1.694327665s: waiting for machine to come up
	I1024 20:11:29.249631   49198 crio.go:444] Took 1.886122 seconds to copy over tarball
	I1024 20:11:29.249712   49198 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:11:32.249370   49198 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.999632152s)
	I1024 20:11:32.249396   49198 crio.go:451] Took 2.999736 seconds to extract the tarball
	I1024 20:11:32.249404   49198 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:11:32.290929   49198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:32.335293   49198 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:11:32.335313   49198 cache_images.go:84] Images are preloaded, skipping loading
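The preload step above first stats /preloaded.tar.lz4 on the guest, copies over the ~457MB cri-o preload tarball when it is missing, unpacks it into /var with lz4, and removes it, so that `crictl images` immediately reports the Kubernetes images as present. A condensed Go sketch of that flow, with copyToGuest as a hypothetical stand-in for the scp done over ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // copyToGuest is a placeholder for the scp step; it does nothing here.
    func copyToGuest(local, remote string) error { return nil }

    // ensurePreload mirrors the log: stat, copy if absent, extract with lz4, remove.
    func ensurePreload(local, remote string) error {
    	if _, err := os.Stat(remote); err == nil {
    		return nil // tarball already on the guest, nothing to do
    	}
    	if err := copyToGuest(local, remote); err != nil {
    		return err
    	}
    	// Same extraction command as in the log: tar with lz4 decompression into /var.
    	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", remote).CombinedOutput(); err != nil {
    		return fmt.Errorf("extract preload: %v: %s", err, out)
    	}
    	return os.Remove(remote)
    }

    func main() {
    	if err := ensurePreload("preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4", "/preloaded.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }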
	I1024 20:11:32.335377   49198 ssh_runner.go:195] Run: crio config
	I1024 20:11:32.394988   49198 cni.go:84] Creating CNI manager for ""
	I1024 20:11:32.395016   49198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:32.395039   49198 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:11:32.395066   49198 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.10 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-867165 NodeName:embed-certs-867165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:11:32.395267   49198 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-867165"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:11:32.395363   49198 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-867165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-867165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
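
The kubeadm options struct logged at 20:11:32.395066 is rendered into the YAML shown above before being copied to the guest as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of that render step using Go's text/template, with a simplified options struct and a heavily trimmed template (illustrative only, not minikube's actual types or template):

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts is a simplified stand-in for the options struct logged above;
// the field names are illustrative.
type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.72.10",
		APIServerPort:     8443,
		NodeName:          "embed-certs-867165",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.28.3",
	}
	// Render to stdout; in the log the rendered result is scp'd to the
	// guest as /var/tmp/minikube/kubeadm.yaml.new instead.
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
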
	I1024 20:11:32.395412   49198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:11:32.408764   49198 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:11:32.408827   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:11:32.417504   49198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1024 20:11:32.433991   49198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:11:32.450599   49198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1024 20:11:32.467822   49198 ssh_runner.go:195] Run: grep 192.168.72.10	control-plane.minikube.internal$ /etc/hosts
	I1024 20:11:32.471830   49198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
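
The bash one-liner above updates /etc/hosts idempotently: it drops any stale control-plane.minikube.internal line, appends the current mapping, and copies the temp file back with sudo. A rough Go equivalent of the same filter-and-append step (a hypothetical helper, not minikube code; it writes the file directly, so it assumes it already runs as root rather than going through the sudo cp step seen in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHostsEntry drops any existing line ending in "<TAB>host" and then
// appends "ip<TAB>host", mirroring the grep -v / echo one-liner above.
func updateHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := updateHostsEntry("/etc/hosts", "192.168.72.10", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
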
	I1024 20:11:32.485398   49198 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165 for IP: 192.168.72.10
	I1024 20:11:32.485440   49198 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:32.485591   49198 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:11:32.485627   49198 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:11:32.485692   49198 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/client.key
	I1024 20:11:32.485751   49198 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.key.802f554a
	I1024 20:11:32.485787   49198 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.key
	I1024 20:11:32.485883   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:11:32.485913   49198 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:11:32.485924   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:11:32.485946   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:11:32.485974   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:11:32.485999   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:11:32.486054   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:32.486664   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:11:32.510981   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:11:32.533691   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:11:32.556372   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:11:32.578805   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:11:32.601563   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:11:32.624846   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:11:32.648498   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:11:32.672429   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:11:32.696146   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:11:32.719078   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:11:32.742894   49198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:11:32.758998   49198 ssh_runner.go:195] Run: openssl version
	I1024 20:11:32.764797   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:11:32.774075   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.778755   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.778809   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.784097   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:11:32.793365   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:11:32.802532   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.806890   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.806936   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.812430   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:11:32.821767   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:11:32.830930   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.835401   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.835455   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.840880   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
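
Each CA above is installed twice: as a PEM under /usr/share/ca-certificates and as an OpenSSL subject-hash symlink (b5213941.0, 51391683.0, 3ec20f2e.0) under /etc/ssl/certs, which is how OpenSSL's default verify path locates trust anchors. A small sketch of producing such a symlink by shelling out to openssl, assuming openssl is on PATH as it is in the guest (illustrative, not the certs.go implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it, mirroring the
// `openssl x509 -hash -noout` followed by `ln -fs` steps in the log.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
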
	I1024 20:11:32.850124   49198 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:11:32.854525   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:11:32.860161   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:11:32.866096   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:11:32.873246   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:11:32.880430   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:11:32.887436   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
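
The `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours before the existing certs are reused. The same check expressed with Go's crypto/x509, as a sketch (paths taken from the log, error handling trimmed):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within d, matching the semantics of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, soon, err)
	}
}
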
	I1024 20:11:32.892960   49198 kubeadm.go:404] StartCluster: {Name:embed-certs-867165 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-867165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:11:32.893073   49198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:11:32.893116   49198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:32.930748   49198 cri.go:89] found id: ""
	I1024 20:11:32.930817   49198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:11:32.939716   49198 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:11:32.939738   49198 kubeadm.go:636] restartCluster start
	I1024 20:11:32.939785   49198 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:11:32.947747   49198 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:32.948905   49198 kubeconfig.go:92] found "embed-certs-867165" server: "https://192.168.72.10:8443"
	I1024 20:11:32.951235   49198 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:11:32.959165   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:32.959215   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:32.970896   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:32.970912   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:32.970957   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:32.980621   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:33.481345   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:33.481442   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:33.492666   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:33.979087   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:33.979490   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:33.979520   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:33.979433   50444 retry.go:31] will retry after 1.877176786s: waiting for machine to come up
	I1024 20:11:35.859337   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:35.859735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:35.859758   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:35.859683   50444 retry.go:31] will retry after 2.235459842s: waiting for machine to come up
	I1024 20:11:38.097481   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:38.097924   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:38.097958   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:38.097878   50444 retry.go:31] will retry after 3.083066899s: waiting for machine to come up
	I1024 20:11:33.981370   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.077568   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.088845   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:34.481489   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.481554   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.492934   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:34.981614   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.981744   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.993154   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:35.480679   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:35.480752   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:35.492474   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:35.981612   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:35.981703   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:35.992389   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:36.480877   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:36.480982   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:36.492142   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:36.980700   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:36.980784   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:36.992439   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:37.480962   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:37.481040   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:37.492219   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:37.980706   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:37.980814   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:37.992040   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:38.481668   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:38.481764   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:38.493319   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.182306   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:41.182647   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:41.182674   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:41.182602   50444 retry.go:31] will retry after 3.348794863s: waiting for machine to come up
	I1024 20:11:38.981418   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:38.981504   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:38.992810   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:39.481357   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:39.481448   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:39.492521   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:39.981019   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:39.981109   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:39.992766   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:40.481341   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:40.481404   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:40.492180   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:40.981106   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:40.981205   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:40.991931   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.481563   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:41.481629   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:41.492601   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.981132   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:41.981226   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:41.992061   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:42.481647   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:42.481713   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:42.492524   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:42.960175   49198 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
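
The long run of "Checking apiserver status ..." lines above is a poll loop: roughly every 500ms the tool runs pgrep for the apiserver process, and when the surrounding context's deadline expires without a hit it falls back to reconfiguring the cluster (the "needs reconfigure: apiserver error: context deadline exceeded" line). A sketch of that poll-until-deadline shape, with a stand-in probe (not the real api_server.go code):

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// checkAPIServer is a stand-in for the pgrep probe seen in the log.
func checkAPIServer() error {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
}

// waitForAPIServer polls the probe until it succeeds or the context expires,
// mirroring the ~500ms cadence visible in the timestamps above.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if err := checkAPIServer(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil && errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("needs reconfigure:", err)
	}
}
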
	I1024 20:11:42.960230   49198 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:11:42.960243   49198 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:11:42.960322   49198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:42.998685   49198 cri.go:89] found id: ""
	I1024 20:11:42.998794   49198 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:11:43.013829   49198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:11:43.023081   49198 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:11:43.023161   49198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:11:43.032165   49198 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:11:43.032189   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:43.148027   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:45.942484   50077 start.go:369] acquired machines lock for "old-k8s-version-467375" in 2m12.988914754s
	I1024 20:11:45.942540   50077 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:45.942548   50077 fix.go:54] fixHost starting: 
	I1024 20:11:45.942969   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:45.943007   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:45.960424   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I1024 20:11:45.960851   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:45.961468   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:11:45.961498   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:45.961852   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:45.962045   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:11:45.962231   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:11:45.963803   50077 fix.go:102] recreateIfNeeded on old-k8s-version-467375: state=Stopped err=<nil>
	I1024 20:11:45.963841   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	W1024 20:11:45.964018   50077 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:45.965809   50077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467375" ...
	I1024 20:11:44.535120   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.535710   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Found IP for machine: 192.168.61.148
	I1024 20:11:44.535735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has current primary IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.535742   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Reserving static IP address...
	I1024 20:11:44.536160   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Reserved static IP address: 192.168.61.148
	I1024 20:11:44.536181   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for SSH to be available...
	I1024 20:11:44.536196   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-643126", mac: "52:54:00:9d:a9:b2", ip: "192.168.61.148"} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.536225   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | skip adding static IP to network mk-default-k8s-diff-port-643126 - found existing host DHCP lease matching {name: "default-k8s-diff-port-643126", mac: "52:54:00:9d:a9:b2", ip: "192.168.61.148"}
	I1024 20:11:44.536247   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Getting to WaitForSSH function...
	I1024 20:11:44.538297   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.538634   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.538669   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.538819   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Using SSH client type: external
	I1024 20:11:44.538846   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa (-rw-------)
	I1024 20:11:44.538897   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:11:44.538935   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | About to run SSH command:
	I1024 20:11:44.538947   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | exit 0
	I1024 20:11:44.629136   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | SSH cmd err, output: <nil>: 
	I1024 20:11:44.629505   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetConfigRaw
	I1024 20:11:44.630190   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:44.632462   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.632782   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.632807   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.633035   49708 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/config.json ...
	I1024 20:11:44.633215   49708 machine.go:88] provisioning docker machine ...
	I1024 20:11:44.633231   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:44.633416   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.633566   49708 buildroot.go:166] provisioning hostname "default-k8s-diff-port-643126"
	I1024 20:11:44.633580   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.633778   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.635853   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.636184   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.636217   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.636295   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:44.636462   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.636608   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.636742   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:44.636890   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:44.637307   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:44.637328   49708 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-643126 && echo "default-k8s-diff-port-643126" | sudo tee /etc/hostname
	I1024 20:11:44.775436   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-643126
	
	I1024 20:11:44.775468   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.778835   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.779280   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.779316   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.779494   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:44.779679   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.779810   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.779933   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:44.780147   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:44.780489   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:44.780518   49708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-643126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-643126/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-643126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:11:44.921274   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:44.921332   49708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:11:44.921368   49708 buildroot.go:174] setting up certificates
	I1024 20:11:44.921385   49708 provision.go:83] configureAuth start
	I1024 20:11:44.921404   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.921747   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:44.924977   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.925413   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.925443   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.925641   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.928106   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.928443   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.928484   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.928617   49708 provision.go:138] copyHostCerts
	I1024 20:11:44.928680   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:11:44.928703   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:11:44.928772   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:11:44.928918   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:11:44.928935   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:11:44.928969   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:11:44.929052   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:11:44.929063   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:11:44.929089   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:11:44.929157   49708 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-643126 san=[192.168.61.148 192.168.61.148 localhost 127.0.0.1 minikube default-k8s-diff-port-643126]
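
provision.go generates a per-machine server.pem signed by the shared CA, with the SANs listed above (the VM IP, localhost, 127.0.0.1, and the machine names). A compact sketch of issuing such a SAN-bearing server certificate with crypto/x509, assuming the CA cert and key already exist at the logged paths; this is illustrative, not libmachine's actual code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the existing CA pair (paths from the log). tls.LoadX509KeyPair
	// accepts any matching PEM cert/key pair, not just leaf certificates.
	caPair, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem",
		"/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem",
	)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-643126"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-643126"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.61.148"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caPair.PrivateKey)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
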
	I1024 20:11:45.170614   49708 provision.go:172] copyRemoteCerts
	I1024 20:11:45.170679   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:11:45.170706   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.173876   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.174213   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.174251   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.174522   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.174744   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.174909   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.175033   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.266012   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1024 20:11:45.294626   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:11:45.323773   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:11:45.347515   49708 provision.go:86] duration metric: configureAuth took 426.107365ms
	I1024 20:11:45.347536   49708 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:11:45.347741   49708 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:45.347830   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.351151   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.351529   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.351560   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.351729   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.351898   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.352132   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.352359   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.352540   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:45.353017   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:45.353045   49708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:11:45.673767   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:11:45.673797   49708 machine.go:91] provisioned docker machine in 1.04057128s
	I1024 20:11:45.673809   49708 start.go:300] post-start starting for "default-k8s-diff-port-643126" (driver="kvm2")
	I1024 20:11:45.673821   49708 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:11:45.673844   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.674180   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:11:45.674213   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.677192   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.677621   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.677663   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.677817   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.678021   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.678180   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.678322   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.769507   49708 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:11:45.774136   49708 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:11:45.774161   49708 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:11:45.774240   49708 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:11:45.774333   49708 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:11:45.774456   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:11:45.782941   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:45.806536   49708 start.go:303] post-start completed in 132.710109ms
	I1024 20:11:45.806565   49708 fix.go:56] fixHost completed within 19.880653804s
	I1024 20:11:45.806602   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.809496   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.809854   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.809892   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.810096   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.810335   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.810534   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.810697   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.810870   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:45.811297   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:45.811312   49708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1024 20:11:45.942309   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178305.886866858
	
	I1024 20:11:45.942334   49708 fix.go:206] guest clock: 1698178305.886866858
	I1024 20:11:45.942343   49708 fix.go:219] Guest: 2023-10-24 20:11:45.886866858 +0000 UTC Remote: 2023-10-24 20:11:45.806569839 +0000 UTC m=+222.349889294 (delta=80.297019ms)
	I1024 20:11:45.942388   49708 fix.go:190] guest clock delta is within tolerance: 80.297019ms
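
fix.go reads the guest clock over SSH (the date +%s.%N command above), compares it with the host clock, and only considers a resync when the delta exceeds a tolerance; here the 80.297019ms delta is accepted. A tiny sketch of that comparison, with the tolerance value assumed for illustration:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host clock difference and whether
// it falls within tol; the timestamps below are the values logged above.
func clockDeltaOK(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	guest := time.Unix(1698178305, 886866858)
	host := guest.Add(-80297019 * time.Nanosecond) // delta = 80.297019ms, as in the log
	d, ok := clockDeltaOK(guest, host, 2*time.Second) // tolerance assumed for this sketch
	fmt.Println(d, ok)
}
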
	I1024 20:11:45.942399   49708 start.go:83] releasing machines lock for "default-k8s-diff-port-643126", held for 20.016514097s
	I1024 20:11:45.942428   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.942819   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:45.946079   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.946507   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.946548   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.946681   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947120   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947286   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947353   49708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:11:45.947411   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.947564   49708 ssh_runner.go:195] Run: cat /version.json
	I1024 20:11:45.947591   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.950504   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.950930   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.950984   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951010   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951176   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.951342   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.951499   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.951522   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.951526   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951638   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.951793   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.951946   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.952178   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.952345   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:46.043544   49708 ssh_runner.go:195] Run: systemctl --version
	I1024 20:11:46.072510   49708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:11:46.230010   49708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:11:46.237538   49708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:11:46.237608   49708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:11:46.259449   49708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:11:46.259468   49708 start.go:472] detecting cgroup driver to use...
	I1024 20:11:46.259530   49708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:11:46.278708   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:11:46.292769   49708 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:11:46.292827   49708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:11:46.311808   49708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:11:46.329420   49708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:11:46.452375   49708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:11:46.584041   49708 docker.go:214] disabling docker service ...
	I1024 20:11:46.584114   49708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:11:46.606114   49708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:11:46.623302   49708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:11:46.732771   49708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:11:46.862687   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:11:46.879573   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:11:46.900885   49708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:11:46.900955   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.911441   49708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:11:46.911500   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.921674   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.931937   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.942104   49708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:11:46.952610   49708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:11:46.961808   49708 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:11:46.961884   49708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:11:46.977789   49708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:11:46.990089   49708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:11:47.130248   49708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:11:47.307336   49708 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:11:47.307402   49708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:11:47.316743   49708 start.go:540] Will wait 60s for crictl version
	I1024 20:11:47.316795   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:11:47.321526   49708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:11:47.369079   49708 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:11:47.369169   49708 ssh_runner.go:195] Run: crio --version
	I1024 20:11:47.419428   49708 ssh_runner.go:195] Run: crio --version
	I1024 20:11:47.477016   49708 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
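
The 49708 stream above reduces to a short series of remote shell steps: point CRI-O at the registry.k8s.io/pause:3.9 image, switch its cgroup manager to cgroupfs, load br_netfilter when the bridge sysctl is missing, enable IPv4 forwarding, and restart crio. Below is a minimal, self-contained sketch of those steps; it runs the commands locally with os/exec rather than through minikube's SSH-backed ssh_runner, and the config path is the one shown in the log.

package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell step and surfaces its combined output on failure.
func run(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("CRI-O reconfigured and restarted")
}
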
	I1024 20:11:45.967071   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Start
	I1024 20:11:45.967249   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring networks are active...
	I1024 20:11:45.967957   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring network default is active
	I1024 20:11:45.968324   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring network mk-old-k8s-version-467375 is active
	I1024 20:11:45.968743   50077 main.go:141] libmachine: (old-k8s-version-467375) Getting domain xml...
	I1024 20:11:45.969525   50077 main.go:141] libmachine: (old-k8s-version-467375) Creating domain...
	I1024 20:11:47.346548   50077 main.go:141] libmachine: (old-k8s-version-467375) Waiting to get IP...
	I1024 20:11:47.347505   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.347894   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.347980   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.347887   50579 retry.go:31] will retry after 232.244798ms: waiting for machine to come up
	I1024 20:11:47.581582   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.582087   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.582118   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.582044   50579 retry.go:31] will retry after 319.930019ms: waiting for machine to come up
	I1024 20:11:47.478565   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:47.481659   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:47.482040   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:47.482066   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:47.482265   49708 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1024 20:11:47.487054   49708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:47.499693   49708 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:11:47.499765   49708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:47.551897   49708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:11:47.551978   49708 ssh_runner.go:195] Run: which lz4
	I1024 20:11:47.557026   49708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1024 20:11:47.562364   49708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:11:47.562393   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1024 20:11:43.852350   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.048386   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.117774   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.202966   49198 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:11:44.203042   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:44.215680   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:44.726471   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:45.226100   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:45.726494   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.226510   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.726607   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.758294   49198 api_server.go:72] duration metric: took 2.555329199s to wait for apiserver process to appear ...
	I1024 20:11:46.758319   49198 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:11:46.758339   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:46.758872   49198 api_server.go:269] stopped: https://192.168.72.10:8443/healthz: Get "https://192.168.72.10:8443/healthz": dial tcp 192.168.72.10:8443: connect: connection refused
	I1024 20:11:46.758909   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:46.759318   49198 api_server.go:269] stopped: https://192.168.72.10:8443/healthz: Get "https://192.168.72.10:8443/healthz": dial tcp 192.168.72.10:8443: connect: connection refused
	I1024 20:11:47.260047   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:50.910793   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:11:50.910830   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:11:50.910852   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:50.943069   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:11:50.943100   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:11:51.259498   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:51.265278   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:11:51.265316   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:11:51.759494   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:51.767253   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:11:51.767280   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:11:52.259758   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:52.265202   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 200:
	ok
	I1024 20:11:52.277533   49198 api_server.go:141] control plane version: v1.28.3
	I1024 20:11:52.277561   49198 api_server.go:131] duration metric: took 5.51923389s to wait for apiserver health ...
	I1024 20:11:52.277572   49198 cni.go:84] Creating CNI manager for ""
	I1024 20:11:52.277580   49198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:52.279542   49198 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
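
The healthz probes above follow a simple pattern: poll https://<apiserver>:8443/healthz, treat connection refusals, 403s, and 500s as "not ready yet", and stop once a plain 200 "ok" comes back. A rough, self-contained sketch of that loop (not minikube's api_server.go; the address, timeout, and TLS handling are illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test cluster's apiserver uses a self-signed CA; verification is
		// skipped here only to keep the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			// 403/500 mean the server is up but still bootstrapping; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.72.10:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
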
	I1024 20:11:47.904065   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.904524   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.904551   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.904467   50579 retry.go:31] will retry after 440.170251ms: waiting for machine to come up
	I1024 20:11:48.346206   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:48.346778   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:48.346802   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:48.346686   50579 retry.go:31] will retry after 472.001777ms: waiting for machine to come up
	I1024 20:11:48.820100   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:48.820625   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:48.820660   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:48.820533   50579 retry.go:31] will retry after 487.055032ms: waiting for machine to come up
	I1024 20:11:49.309351   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:49.309816   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:49.309836   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:49.309751   50579 retry.go:31] will retry after 945.474211ms: waiting for machine to come up
	I1024 20:11:50.257106   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:50.257611   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:50.257641   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:50.257563   50579 retry.go:31] will retry after 915.312538ms: waiting for machine to come up
	I1024 20:11:51.174245   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:51.174832   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:51.174889   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:51.174792   50579 retry.go:31] will retry after 1.09533855s: waiting for machine to come up
	I1024 20:11:52.271604   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:52.272082   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:52.272111   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:52.272041   50579 retry.go:31] will retry after 1.411155014s: waiting for machine to come up
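
The old-k8s-version-467375 stream shows the other recurring pattern here: poll libvirt for the machine's DHCP lease, backing off with growing, jittered delays (roughly 200ms up to about 1.4s in the log). A compact sketch of that shape, with getIP as a hypothetical stand-in for the real lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("machine has no DHCP lease yet")

// getIP is a placeholder for the libvirt lease lookup the log performs.
func getIP() (string, error) { return "", errNoLease }

func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil {
			return ip, nil
		}
		// Add jitter and grow the base delay, mirroring the increasing
		// "will retry after ..." intervals above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 2*time.Second {
			delay += delay / 2
		}
	}
	return "", fmt.Errorf("no IP within %s", maxWait)
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}
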
	I1024 20:11:49.517078   49708 crio.go:444] Took 1.960093 seconds to copy over tarball
	I1024 20:11:49.517170   49708 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:11:53.113830   49708 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.596633239s)
	I1024 20:11:53.113858   49708 crio.go:451] Took 3.596755 seconds to extract the tarball
	I1024 20:11:53.113865   49708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:11:53.157476   49708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:53.204980   49708 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:11:53.205004   49708 cache_images.go:84] Images are preloaded, skipping loading
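
The "couldn't find preloaded image ... assuming images are not preloaded" and "all images are preloaded" lines come from inspecting "sudo crictl images --output json" for a marker image before and after extracting the preload tarball. A sketch of that check, under the assumption that crictl's JSON output is an object with an "images" array whose entries carry "repoTags":

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the container runtime already knows the given tag.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.3")
	if err != nil {
		fmt.Println("crictl check failed:", err)
		return
	}
	if !ok {
		// Not preloaded yet: extract the cached tarball the same way the log does,
		// e.g. sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
		fmt.Println("preload extraction required")
	}
}
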
	I1024 20:11:53.205090   49708 ssh_runner.go:195] Run: crio config
	I1024 20:11:53.264588   49708 cni.go:84] Creating CNI manager for ""
	I1024 20:11:53.264613   49708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:53.264634   49708 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:11:53.264662   49708 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.148 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-643126 NodeName:default-k8s-diff-port-643126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:11:53.264869   49708 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.148
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-643126"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:11:53.264975   49708 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-643126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-643126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
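
The kubelet drop-in above (a [Unit]/[Service]/[Install] file that clears and re-sets ExecStart) is rendered from the cluster config. Purely as an illustration of that kind of templating (the field names here are invented for the sketch and are not minikube's), one way to produce such a unit file is:

package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime":     "crio",
		"KubeletPath": "/var/lib/minikube/binaries/v1.28.3/kubelet",
		"CRISocket":   "unix:///var/run/crio/crio.sock",
		"NodeName":    "default-k8s-diff-port-643126",
		"NodeIP":      "192.168.61.148",
	})
}
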
	I1024 20:11:53.265054   49708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:11:53.275886   49708 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:11:53.275982   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:11:53.286132   49708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1024 20:11:53.303735   49708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:11:53.319522   49708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1024 20:11:53.338388   49708 ssh_runner.go:195] Run: grep 192.168.61.148	control-plane.minikube.internal$ /etc/hosts
	I1024 20:11:53.343108   49708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
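
The bash one-liner above makes the hosts entry idempotent: drop any existing line ending in the name, then append the desired IP-to-name mapping and copy the result into place. The same idea as a small, self-contained Go function (writing to a scratch path here rather than /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any stale mapping for name and appends ip<TAB>name.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // skip blanks and any existing entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts.example", "192.168.61.148", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
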
	I1024 20:11:53.355662   49708 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126 for IP: 192.168.61.148
	I1024 20:11:53.355709   49708 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:53.355873   49708 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:11:53.355910   49708 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:11:53.356023   49708 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.key
	I1024 20:11:53.356086   49708 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.key.8ba5a111
	I1024 20:11:53.356122   49708 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.key
	I1024 20:11:53.356237   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:11:53.356265   49708 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:11:53.356275   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:11:53.356299   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:11:53.356320   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:11:53.356341   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:11:53.356377   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:53.357029   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:11:53.379968   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:11:53.401871   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:11:53.423699   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:11:53.445338   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:11:53.469994   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:11:53.495061   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:11:52.281055   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:11:52.299421   49198 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:11:52.322020   49198 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:11:52.334273   49198 system_pods.go:59] 8 kube-system pods found
	I1024 20:11:52.334318   49198 system_pods.go:61] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:11:52.334332   49198 system_pods.go:61] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:11:52.334356   49198 system_pods.go:61] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:11:52.334372   49198 system_pods.go:61] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:11:52.334389   49198 system_pods.go:61] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:11:52.334401   49198 system_pods.go:61] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:11:52.334413   49198 system_pods.go:61] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:11:52.334425   49198 system_pods.go:61] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:11:52.334438   49198 system_pods.go:74] duration metric: took 12.395036ms to wait for pod list to return data ...
	I1024 20:11:52.334450   49198 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:11:52.338486   49198 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:11:52.338518   49198 node_conditions.go:123] node cpu capacity is 2
	I1024 20:11:52.338530   49198 node_conditions.go:105] duration metric: took 4.073559ms to run NodePressure ...
	I1024 20:11:52.338555   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:55.075569   49198 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.736987276s)
	I1024 20:11:55.075611   49198 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:11:55.080481   49198 kubeadm.go:787] kubelet initialised
	I1024 20:11:55.080508   49198 kubeadm.go:788] duration metric: took 4.884507ms waiting for restarted kubelet to initialise ...
	I1024 20:11:55.080519   49198 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:11:55.087371   49198 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.092583   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.092616   49198 pod_ready.go:81] duration metric: took 5.215308ms waiting for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.092627   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.092636   49198 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.098518   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "etcd-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.098540   49198 pod_ready.go:81] duration metric: took 5.887969ms waiting for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.098551   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "etcd-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.098560   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.103375   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.103400   49198 pod_ready.go:81] duration metric: took 4.83092ms waiting for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.103411   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.103419   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.108416   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.108443   49198 pod_ready.go:81] duration metric: took 5.016219ms waiting for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.108454   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.108462   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.482846   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-proxy-thkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.482873   49198 pod_ready.go:81] duration metric: took 374.401616ms waiting for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.482885   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-proxy-thkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.482897   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.879895   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.879922   49198 pod_ready.go:81] duration metric: took 397.016576ms waiting for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.879935   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.879947   49198 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:56.280405   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:56.280445   49198 pod_ready.go:81] duration metric: took 400.488591ms waiting for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:56.280464   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:56.280475   49198 pod_ready.go:38] duration metric: took 1.19994252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
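
The pod_ready.go lines above wait for each system-critical pod's Ready condition, short-circuiting (and logging "skipping!") while the hosting node itself is not Ready. A minimal client-go sketch of the basic wait; the kubeconfig path and pod name are copied from the log purely as examples, and this is not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17485-9023/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-6qq4r", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
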
	I1024 20:11:56.280498   49198 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:11:56.298423   49198 ops.go:34] apiserver oom_adj: -16
	I1024 20:11:56.298445   49198 kubeadm.go:640] restartCluster took 23.358699894s
	I1024 20:11:56.298455   49198 kubeadm.go:406] StartCluster complete in 23.405500606s
	I1024 20:11:56.298474   49198 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:56.298551   49198 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:11:56.300724   49198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:56.300999   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:11:56.301104   49198 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:11:56.301193   49198 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-867165"
	I1024 20:11:56.301203   49198 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:56.301216   49198 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-867165"
	W1024 20:11:56.301261   49198 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:11:56.301260   49198 addons.go:69] Setting metrics-server=true in profile "embed-certs-867165"
	I1024 20:11:56.301290   49198 addons.go:69] Setting default-storageclass=true in profile "embed-certs-867165"
	I1024 20:11:56.301312   49198 addons.go:231] Setting addon metrics-server=true in "embed-certs-867165"
	I1024 20:11:56.301315   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	W1024 20:11:56.301328   49198 addons.go:240] addon metrics-server should already be in state true
	I1024 20:11:56.301331   49198 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-867165"
	I1024 20:11:56.301418   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	I1024 20:11:56.301743   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301744   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301767   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.301771   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.301826   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301867   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.307030   49198 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-867165" context rescaled to 1 replicas
	I1024 20:11:56.307062   49198 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:11:56.309053   49198 out.go:177] * Verifying Kubernetes components...
	I1024 20:11:56.310743   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:11:56.317523   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I1024 20:11:56.317889   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.318430   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.318450   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.318881   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.319437   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.319486   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.320723   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1024 20:11:56.320906   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39685
	I1024 20:11:56.321377   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.321491   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.322079   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.322107   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.322370   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.322389   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.322464   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.322770   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.322829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.323410   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.323444   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.326654   49198 addons.go:231] Setting addon default-storageclass=true in "embed-certs-867165"
	W1024 20:11:56.326674   49198 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:11:56.326700   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	I1024 20:11:56.327084   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.327111   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.335811   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I1024 20:11:56.336310   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.336762   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.336774   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.337109   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.337272   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.338868   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.340964   49198 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:11:56.342438   49198 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:11:56.342454   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:11:56.342472   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.341955   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I1024 20:11:56.343402   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.344019   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.344038   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.344502   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.344694   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.345753   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.346097   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I1024 20:11:56.346367   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.346398   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.346660   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.346666   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.346829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.348534   49198 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:11:53.684729   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:53.685093   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:53.685129   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:53.685030   50579 retry.go:31] will retry after 1.793178726s: waiting for machine to come up
	I1024 20:11:55.481150   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:55.481696   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:55.481729   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:55.481639   50579 retry.go:31] will retry after 2.680463816s: waiting for machine to come up
	I1024 20:11:56.347164   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.347192   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.350114   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.350141   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:11:56.350155   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:11:56.350174   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.350270   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.350397   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.350847   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.351478   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.351514   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.354060   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.354451   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.354472   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.354625   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.354819   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.354978   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.355161   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.371309   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1024 20:11:56.371746   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.372300   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.372325   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.372764   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.372981   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.374651   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.374894   49198 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:11:56.374911   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:11:56.374934   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.377962   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.378385   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.378408   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.378585   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.378789   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.378954   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.379083   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.471271   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:11:56.504355   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:11:56.504382   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:11:56.552351   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:11:56.576037   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:11:56.576068   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:11:56.606745   49198 node_ready.go:35] waiting up to 6m0s for node "embed-certs-867165" to be "Ready" ...
	I1024 20:11:56.606772   49198 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:11:56.620862   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:11:56.620897   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:11:56.676519   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:11:57.851757   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.380440836s)
	I1024 20:11:57.851814   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.851816   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.299429923s)
	I1024 20:11:57.851829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.851865   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.851882   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852242   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852262   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852272   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.852282   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852368   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852412   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852441   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.852467   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852412   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.852537   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852560   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852814   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.852859   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852877   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860105   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183533543s)
	I1024 20:11:57.860176   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.860195   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.860492   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.860494   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.860515   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860526   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.860537   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.860828   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.860857   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.860876   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860886   49198 addons.go:467] Verifying addon metrics-server=true in "embed-certs-867165"
	I1024 20:11:57.860990   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.861011   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.861220   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.861227   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.861236   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.864370   49198 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1024 20:11:53.521030   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:11:53.844700   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:11:53.868393   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:11:53.892495   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:11:53.916345   49708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:11:53.935576   49708 ssh_runner.go:195] Run: openssl version
	I1024 20:11:53.943066   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:11:53.957325   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.962959   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.963026   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.969104   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:11:53.980253   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:11:53.990977   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:53.995906   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:53.995992   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:54.001847   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:11:54.012635   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:11:54.023490   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.028300   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.028355   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.033965   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:11:54.044984   49708 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:11:54.049588   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:11:54.055434   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:11:54.061692   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:11:54.068131   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:11:54.074484   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:11:54.080349   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
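	The openssl commands above compute each CA certificate's subject hash, link it into /etc/ssl/certs as <hash>.0, and then verify the cluster certificates are not about to expire (-checkend 86400). A self-contained sketch of the hash-and-symlink step, using an illustrative certificate path rather than minikube's own code:

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert follows the pattern visible in the log above: compute the
	// OpenSSL subject hash of a CA certificate, then ensure a <hash>.0 symlink
	// exists in /etc/ssl/certs so TLS clients can find it.
	func installCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// test -L <link> || ln -fs <cert> <link>, as run over SSH in the log.
		cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, certPath, link)
		return exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
	}

	func main() {
		// Illustrative path; the report uses the certificates listed above.
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}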
	I1024 20:11:54.086499   49708 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-643126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-643126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.148 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:11:54.086598   49708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:11:54.086655   49708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:54.127406   49708 cri.go:89] found id: ""
	I1024 20:11:54.127494   49708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:11:54.137720   49708 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:11:54.137743   49708 kubeadm.go:636] restartCluster start
	I1024 20:11:54.137801   49708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:11:54.147925   49708 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.149006   49708 kubeconfig.go:92] found "default-k8s-diff-port-643126" server: "https://192.168.61.148:8444"
	I1024 20:11:54.151513   49708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:11:54.162303   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.162371   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.173715   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.173763   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.173816   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.184641   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.685342   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.685431   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.698640   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:55.185173   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:55.185284   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:55.201355   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:55.684814   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:55.684885   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:55.696664   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:56.185711   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:56.185795   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:56.201419   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:56.684932   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:56.685029   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:56.701458   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.185009   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:57.185111   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:57.201166   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.685654   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:57.685739   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:57.701496   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:58.185022   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:58.185076   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:58.197394   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.865715   49198 addons.go:502] enable addons completed in 1.564611111s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1024 20:11:58.683275   49198 node_ready.go:58] node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:58.163942   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:58.164342   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:58.164369   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:58.164308   50579 retry.go:31] will retry after 2.238050336s: waiting for machine to come up
	I1024 20:12:00.403552   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:00.403947   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:12:00.403975   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:12:00.403907   50579 retry.go:31] will retry after 3.901299207s: waiting for machine to come up
	I1024 20:11:58.685131   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:58.685225   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:58.700458   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:59.184854   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:59.184936   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:59.200498   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:59.685159   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:59.685260   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:59.698793   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.185350   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:00.185418   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:00.200046   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.685255   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:00.685341   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:00.698229   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:01.185036   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:01.185105   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:01.200083   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:01.685617   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:01.685700   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:01.697442   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:02.184897   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:02.184980   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:02.196208   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:02.685769   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:02.685854   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:02.697356   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:03.184898   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:03.184977   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:03.196522   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.684425   49198 node_ready.go:58] node "embed-certs-867165" has status "Ready":"False"
	I1024 20:12:01.683130   49198 node_ready.go:49] node "embed-certs-867165" has status "Ready":"True"
	I1024 20:12:01.683154   49198 node_ready.go:38] duration metric: took 5.076371929s waiting for node "embed-certs-867165" to be "Ready" ...
	I1024 20:12:01.683162   49198 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:01.689566   49198 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:01.695393   49198 pod_ready.go:92] pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:01.695416   49198 pod_ready.go:81] duration metric: took 5.827696ms waiting for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:01.695427   49198 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:03.712775   49198 pod_ready.go:102] pod "etcd-embed-certs-867165" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:04.306338   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:04.306804   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:12:04.306835   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:12:04.306770   50579 retry.go:31] will retry after 5.15211395s: waiting for machine to come up
	I1024 20:12:03.685737   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:03.685827   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:03.697510   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:04.163385   49708 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:12:04.163416   49708 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:12:04.163449   49708 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:12:04.163520   49708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:04.209780   49708 cri.go:89] found id: ""
	I1024 20:12:04.209834   49708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:12:04.226347   49708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:12:04.235134   49708 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:12:04.235185   49708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:04.243361   49708 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:04.243380   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:04.370510   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.461155   49708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090606159s)
	I1024 20:12:05.461192   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.649281   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.742338   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.829426   49708 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:12:05.829494   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:05.841869   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:06.356907   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:06.856157   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:07.356140   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:07.856020   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:08.356129   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:08.382595   49708 api_server.go:72] duration metric: took 2.553177252s to wait for apiserver process to appear ...
	I1024 20:12:08.382622   49708 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:12:08.382641   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
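	At this point the restart logic stops polling for the kube-apiserver process and starts probing its /healthz endpoint until it answers. A minimal sketch of such a healthz wait loop; the URL is taken from the log line above, while the client settings (per-request timeout, skipped TLS verification) are assumptions made only for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200 or the deadline passes. Skipping TLS verification is an
	// assumption for this sketch; a real client would trust the cluster CA.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.148:8444/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}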
	I1024 20:12:04.213550   49198 pod_ready.go:92] pod "etcd-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.213573   49198 pod_ready.go:81] duration metric: took 2.518138084s waiting for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.213585   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.218813   49198 pod_ready.go:92] pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.218841   49198 pod_ready.go:81] duration metric: took 5.247061ms waiting for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.218855   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.224562   49198 pod_ready.go:92] pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.224585   49198 pod_ready.go:81] duration metric: took 5.720637ms waiting for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.224597   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.484197   49198 pod_ready.go:92] pod "kube-proxy-thkqr" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.484216   49198 pod_ready.go:81] duration metric: took 259.611869ms waiting for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.484224   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.883941   49198 pod_ready.go:92] pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.883968   49198 pod_ready.go:81] duration metric: took 399.73679ms waiting for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.883982   49198 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:07.193414   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:10.878419   49071 start.go:369] acquired machines lock for "no-preload-014826" in 1m0.065559113s
	I1024 20:12:10.878467   49071 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:12:10.878475   49071 fix.go:54] fixHost starting: 
	I1024 20:12:10.878869   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:10.878901   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:10.898307   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I1024 20:12:10.898732   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:10.899250   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:12:10.899268   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:10.899614   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:10.899790   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:10.899933   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:12:10.901569   49071 fix.go:102] recreateIfNeeded on no-preload-014826: state=Stopped err=<nil>
	I1024 20:12:10.901593   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	W1024 20:12:10.901753   49071 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:12:10.904367   49071 out.go:177] * Restarting existing kvm2 VM for "no-preload-014826" ...
	I1024 20:12:09.462373   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.462813   50077 main.go:141] libmachine: (old-k8s-version-467375) Found IP for machine: 192.168.39.71
	I1024 20:12:09.462836   50077 main.go:141] libmachine: (old-k8s-version-467375) Reserving static IP address...
	I1024 20:12:09.462853   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has current primary IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.463385   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "old-k8s-version-467375", mac: "52:54:00:28:42:97", ip: "192.168.39.71"} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.463423   50077 main.go:141] libmachine: (old-k8s-version-467375) Reserved static IP address: 192.168.39.71
	I1024 20:12:09.463442   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | skip adding static IP to network mk-old-k8s-version-467375 - found existing host DHCP lease matching {name: "old-k8s-version-467375", mac: "52:54:00:28:42:97", ip: "192.168.39.71"}
	I1024 20:12:09.463463   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Getting to WaitForSSH function...
	I1024 20:12:09.463484   50077 main.go:141] libmachine: (old-k8s-version-467375) Waiting for SSH to be available...
	I1024 20:12:09.465635   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.465951   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.465979   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.466131   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Using SSH client type: external
	I1024 20:12:09.466167   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa (-rw-------)
	I1024 20:12:09.466210   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:12:09.466227   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | About to run SSH command:
	I1024 20:12:09.466256   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | exit 0
	I1024 20:12:09.565274   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | SSH cmd err, output: <nil>: 
	I1024 20:12:09.565647   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetConfigRaw
	I1024 20:12:09.566251   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:09.569078   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.569551   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.569585   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.569863   50077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:12:09.570097   50077 machine.go:88] provisioning docker machine ...
	I1024 20:12:09.570122   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:09.570355   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.570604   50077 buildroot.go:166] provisioning hostname "old-k8s-version-467375"
	I1024 20:12:09.570634   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.570807   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.573170   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.573560   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.573587   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.573757   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:09.573934   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.574080   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.574209   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:09.574414   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:09.574840   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:09.574858   50077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467375 && echo "old-k8s-version-467375" | sudo tee /etc/hostname
	I1024 20:12:09.718150   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467375
	
	I1024 20:12:09.718201   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.721079   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.721461   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.721495   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.721653   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:09.721865   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.722016   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.722167   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:09.722324   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:09.722712   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:09.722732   50077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467375' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467375/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467375' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:12:09.865069   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:12:09.865098   50077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:12:09.865125   50077 buildroot.go:174] setting up certificates
	I1024 20:12:09.865136   50077 provision.go:83] configureAuth start
	I1024 20:12:09.865151   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.865449   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:09.868055   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.868480   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.868513   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.868693   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.870838   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.871203   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.871227   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.871363   50077 provision.go:138] copyHostCerts
	I1024 20:12:09.871411   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:12:09.871423   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:12:09.871490   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:12:09.871613   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:12:09.871625   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:12:09.871655   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:12:09.871743   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:12:09.871753   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:12:09.871783   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:12:09.871856   50077 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467375 san=[192.168.39.71 192.168.39.71 localhost 127.0.0.1 minikube old-k8s-version-467375]
	I1024 20:12:10.091178   50077 provision.go:172] copyRemoteCerts
	I1024 20:12:10.091229   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:12:10.091253   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.094245   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.094550   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.094590   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.094759   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.094955   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.095123   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.095271   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.192715   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:12:10.216110   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:12:10.239468   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 20:12:10.263113   50077 provision.go:86] duration metric: configureAuth took 397.957727ms
	I1024 20:12:10.263138   50077 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:12:10.263366   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:12:10.263480   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.265995   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.266293   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.266334   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.266467   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.266696   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.266863   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.267027   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.267168   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:10.267653   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:10.267677   50077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:12:10.596009   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:12:10.596032   50077 machine.go:91] provisioned docker machine in 1.025920355s
	I1024 20:12:10.596041   50077 start.go:300] post-start starting for "old-k8s-version-467375" (driver="kvm2")
	I1024 20:12:10.596050   50077 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:12:10.596075   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.596415   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:12:10.596450   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.598886   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.599234   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.599259   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.599446   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.599647   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.599812   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.599955   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.697045   50077 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:12:10.701363   50077 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:12:10.701387   50077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:12:10.701458   50077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:12:10.701546   50077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:12:10.701653   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:12:10.712072   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:10.737471   50077 start.go:303] post-start completed in 141.415073ms
	I1024 20:12:10.737508   50077 fix.go:56] fixHost completed within 24.794946143s
	I1024 20:12:10.737533   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.740438   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.740792   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.740820   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.741024   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.741247   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.741428   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.741691   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.741861   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:10.742407   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:10.742431   50077 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:12:10.878250   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178330.824734287
	
	I1024 20:12:10.878273   50077 fix.go:206] guest clock: 1698178330.824734287
	I1024 20:12:10.878283   50077 fix.go:219] Guest: 2023-10-24 20:12:10.824734287 +0000 UTC Remote: 2023-10-24 20:12:10.737513672 +0000 UTC m=+157.935911605 (delta=87.220615ms)
	I1024 20:12:10.878307   50077 fix.go:190] guest clock delta is within tolerance: 87.220615ms
	I1024 20:12:10.878314   50077 start.go:83] releasing machines lock for "old-k8s-version-467375", held for 24.935800385s
	I1024 20:12:10.878347   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.878614   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:10.881335   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.881746   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.881784   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.881933   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882442   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882654   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882741   50077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:12:10.882801   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.882860   50077 ssh_runner.go:195] Run: cat /version.json
	I1024 20:12:10.882886   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.885640   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.885856   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886047   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.886070   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886209   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.886276   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.886315   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886383   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.886439   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.886535   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.886579   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.886683   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.886699   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.886816   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:11.006700   50077 ssh_runner.go:195] Run: systemctl --version
	I1024 20:12:11.012734   50077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:12:11.162399   50077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:12:11.169673   50077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:12:11.169751   50077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:12:11.184770   50077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:12:11.184794   50077 start.go:472] detecting cgroup driver to use...
	I1024 20:12:11.184858   50077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:12:11.202317   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:12:11.218122   50077 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:12:11.218187   50077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:12:11.233177   50077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:12:11.247591   50077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:12:11.387195   50077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:12:11.520544   50077 docker.go:214] disabling docker service ...
	I1024 20:12:11.520615   50077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:12:11.539166   50077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:12:11.552957   50077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:12:11.710494   50077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:12:11.837532   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:12:11.854418   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:12:11.874953   50077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1024 20:12:11.875040   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.887115   50077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:12:11.887206   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.898994   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.908652   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.918280   50077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:12:11.930870   50077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:12:11.939522   50077 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:12:11.939580   50077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:12:11.955005   50077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:12:11.965173   50077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:12:12.098480   50077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:12:12.296897   50077 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:12:12.296993   50077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:12:12.302906   50077 start.go:540] Will wait 60s for crictl version
	I1024 20:12:12.302956   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:12.307142   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:12:12.353253   50077 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:12:12.353369   50077 ssh_runner.go:195] Run: crio --version
	I1024 20:12:12.417241   50077 ssh_runner.go:195] Run: crio --version
	I1024 20:12:12.486375   50077 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1024 20:12:12.487819   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:12.491366   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:12.491830   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:12.491862   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:12.492054   50077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 20:12:12.497705   50077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:12.514116   50077 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 20:12:12.514208   50077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:12.569171   50077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 20:12:12.569247   50077 ssh_runner.go:195] Run: which lz4
	I1024 20:12:12.574729   50077 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 20:12:12.579319   50077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:12:12.579364   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1024 20:12:10.905856   49071 main.go:141] libmachine: (no-preload-014826) Calling .Start
	I1024 20:12:10.906027   49071 main.go:141] libmachine: (no-preload-014826) Ensuring networks are active...
	I1024 20:12:10.906761   49071 main.go:141] libmachine: (no-preload-014826) Ensuring network default is active
	I1024 20:12:10.907112   49071 main.go:141] libmachine: (no-preload-014826) Ensuring network mk-no-preload-014826 is active
	I1024 20:12:10.907486   49071 main.go:141] libmachine: (no-preload-014826) Getting domain xml...
	I1024 20:12:10.908225   49071 main.go:141] libmachine: (no-preload-014826) Creating domain...
	I1024 20:12:12.324832   49071 main.go:141] libmachine: (no-preload-014826) Waiting to get IP...
	I1024 20:12:12.326055   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.326595   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.326695   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.326594   50821 retry.go:31] will retry after 197.462386ms: waiting for machine to come up
	I1024 20:12:12.526293   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.526743   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.526774   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.526720   50821 retry.go:31] will retry after 271.486585ms: waiting for machine to come up
	I1024 20:12:12.800360   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.801756   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.801940   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.801863   50821 retry.go:31] will retry after 486.882671ms: waiting for machine to come up
	I1024 20:12:12.479397   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:12.479431   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:12.479445   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:12.490441   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:12.490470   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:12.990764   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:13.006526   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:13.006556   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:13.490974   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:13.499731   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:13.499764   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:09.195216   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:11.694410   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:13.698362   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:13.991467   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:14.011775   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 200:
	ok
	I1024 20:12:14.048756   49708 api_server.go:141] control plane version: v1.28.3
	I1024 20:12:14.048791   49708 api_server.go:131] duration metric: took 5.666161032s to wait for apiserver health ...
	I1024 20:12:14.048802   49708 cni.go:84] Creating CNI manager for ""
	I1024 20:12:14.048812   49708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:14.050652   49708 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:12:14.052331   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:12:14.086953   49708 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:12:14.142753   49708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:12:14.162085   49708 system_pods.go:59] 8 kube-system pods found
	I1024 20:12:14.162211   49708 system_pods.go:61] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:12:14.162246   49708 system_pods.go:61] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:12:14.162280   49708 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:12:14.162307   49708 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:12:14.162330   49708 system_pods.go:61] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:12:14.162352   49708 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:12:14.162375   49708 system_pods.go:61] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:12:14.162411   49708 system_pods.go:61] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:12:14.162434   49708 system_pods.go:74] duration metric: took 19.657104ms to wait for pod list to return data ...
	I1024 20:12:14.162456   49708 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:12:14.173042   49708 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:12:14.173078   49708 node_conditions.go:123] node cpu capacity is 2
	I1024 20:12:14.173093   49708 node_conditions.go:105] duration metric: took 10.618815ms to run NodePressure ...
	I1024 20:12:14.173117   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:14.763495   49708 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:12:14.768626   49708 kubeadm.go:787] kubelet initialised
	I1024 20:12:14.768653   49708 kubeadm.go:788] duration metric: took 5.128553ms waiting for restarted kubelet to initialise ...
	I1024 20:12:14.768663   49708 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:14.788128   49708 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.800546   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.800582   49708 pod_ready.go:81] duration metric: took 12.417978ms waiting for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.800597   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.800610   49708 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.808416   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.808448   49708 pod_ready.go:81] duration metric: took 7.821099ms waiting for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.808463   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.808472   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.814286   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.814317   49708 pod_ready.go:81] duration metric: took 5.833548ms waiting for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.814331   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.814341   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.825548   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.825582   49708 pod_ready.go:81] duration metric: took 11.230382ms waiting for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.825596   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.825606   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.168279   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-proxy-x4zbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.168323   49708 pod_ready.go:81] duration metric: took 342.707312ms waiting for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.168338   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-proxy-x4zbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.168351   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.567697   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.567735   49708 pod_ready.go:81] duration metric: took 399.371702ms waiting for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.567750   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.567838   49708 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.967716   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.967750   49708 pod_ready.go:81] duration metric: took 399.892272ms waiting for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.967764   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.967773   49708 pod_ready.go:38] duration metric: took 1.199098599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:15.967793   49708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:12:15.986399   49708 ops.go:34] apiserver oom_adj: -16
	I1024 20:12:15.986422   49708 kubeadm.go:640] restartCluster took 21.848673162s
	I1024 20:12:15.986430   49708 kubeadm.go:406] StartCluster complete in 21.899940105s
	I1024 20:12:15.986444   49708 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:15.986545   49708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:12:15.989108   49708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:15.989647   49708 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:12:15.989617   49708 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:12:15.989715   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:12:15.989719   49708 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989736   49708 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-643126"
	W1024 20:12:15.989752   49708 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:12:15.989752   49708 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989775   49708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-643126"
	I1024 20:12:15.989786   49708 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989802   49708 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-643126"
	I1024 20:12:15.989804   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	W1024 20:12:15.989809   49708 addons.go:240] addon metrics-server should already be in state true
	I1024 20:12:15.989849   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	I1024 20:12:15.990183   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990192   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990246   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.990294   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.990209   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990327   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.995810   49708 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-643126" context rescaled to 1 replicas
	I1024 20:12:15.995838   49708 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.148 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:12:15.998001   49708 out.go:177] * Verifying Kubernetes components...
	I1024 20:12:16.001589   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:12:16.010690   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I1024 20:12:16.011310   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.011861   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.011890   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.012279   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.012906   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.012960   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.013706   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I1024 20:12:16.014057   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.014533   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.014560   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.014905   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.015330   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I1024 20:12:16.015444   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.015486   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.015703   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.016168   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.016188   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.016591   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.016763   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.020428   49708 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-643126"
	W1024 20:12:16.020448   49708 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:12:16.020474   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	I1024 20:12:16.020840   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.020873   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.031538   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I1024 20:12:16.033822   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.034350   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.034367   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.034746   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.034802   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34969
	I1024 20:12:16.034978   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.035073   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.035525   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.035549   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.035943   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.036217   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.036694   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.038891   49708 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:12:16.037871   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.040815   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:12:16.040832   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:12:16.040851   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.042238   49708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:14.393634   50077 crio.go:444] Took 1.818945 seconds to copy over tarball
	I1024 20:12:14.393720   50077 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:12:17.795931   50077 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.402175992s)
	I1024 20:12:17.795962   50077 crio.go:451] Took 3.402303 seconds to extract the tarball
	I1024 20:12:17.795974   50077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:12:17.841100   50077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:16.043742   49708 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:12:16.043758   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:12:16.043775   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.046924   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.047003   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.047035   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.047068   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.047224   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.049392   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.049433   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.049469   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.049487   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I1024 20:12:16.049492   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.049976   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.050488   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.050502   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.050534   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.050712   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.050810   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.050844   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.050974   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.051292   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.051327   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.051585   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.067412   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I1024 20:12:16.067810   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.068428   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.068445   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.068991   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.069222   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.070923   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.071196   49708 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:12:16.071219   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:12:16.071238   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.074735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.075400   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.075431   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.075630   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.075796   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.075935   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.076097   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.201177   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:12:16.201198   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:12:16.224757   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:12:16.247200   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:12:16.247225   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:12:16.259476   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:12:16.324327   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:12:16.324354   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:12:16.371331   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:12:16.384042   49708 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-643126" to be "Ready" ...
	I1024 20:12:16.384367   49708 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:12:17.654459   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.429657283s)
	I1024 20:12:17.654516   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.654529   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.654951   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:17.654978   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.654990   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:17.655004   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.655016   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.655330   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.655353   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:17.672310   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.672337   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.672693   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:17.672738   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.672761   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.138719   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.879209719s)
	I1024 20:12:18.138769   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.138783   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.139079   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.139091   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.139103   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.139117   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.139132   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.139322   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.139338   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.139338   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.203722   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.832303736s)
	I1024 20:12:18.203776   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.203793   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.204088   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.204106   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.204118   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.204128   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.204348   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.204378   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.204393   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.204406   49708 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-643126"
	I1024 20:12:13.290974   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:13.291494   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:13.291524   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:13.291402   50821 retry.go:31] will retry after 588.738796ms: waiting for machine to come up
	I1024 20:12:13.882058   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:13.882661   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:13.882685   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:13.882577   50821 retry.go:31] will retry after 626.457323ms: waiting for machine to come up
	I1024 20:12:14.510560   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:14.511120   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:14.511159   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:14.511059   50821 retry.go:31] will retry after 848.521213ms: waiting for machine to come up
	I1024 20:12:15.360917   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:15.361423   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:15.361452   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:15.361397   50821 retry.go:31] will retry after 790.780783ms: waiting for machine to come up
	I1024 20:12:16.153815   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:16.154332   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:16.154364   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:16.154274   50821 retry.go:31] will retry after 1.066691012s: waiting for machine to come up
	I1024 20:12:17.222675   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:17.223280   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:17.223309   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:17.223248   50821 retry.go:31] will retry after 1.657285361s: waiting for machine to come up
	I1024 20:12:18.299768   49708 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1024 20:12:16.196266   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:18.197531   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:18.397703   49708 node_ready.go:58] node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:17.907894   50077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 20:12:18.029064   50077 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 20:12:18.029174   50077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.029196   50077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.029209   50077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.029219   50077 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.029403   50077 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1024 20:12:18.029418   50077 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.029178   50077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.029178   50077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.030719   50077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.030726   50077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.030730   50077 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1024 20:12:18.030748   50077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.030775   50077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.030801   50077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.030972   50077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.031077   50077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.180435   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.182586   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.185966   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1024 20:12:18.190926   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.196636   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.198176   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.205102   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.285789   50077 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1024 20:12:18.285837   50077 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.285889   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.356595   50077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1024 20:12:18.356639   50077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.356678   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.370773   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.387248   50077 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1024 20:12:18.387295   50077 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.387343   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.387461   50077 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1024 20:12:18.387488   50077 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1024 20:12:18.387530   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400566   50077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1024 20:12:18.400608   50077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.400647   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400660   50077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1024 20:12:18.400705   50077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.400742   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400754   50077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1024 20:12:18.400785   50077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.400812   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400845   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.400814   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.545451   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.545541   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1024 20:12:18.545587   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.545674   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.545724   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.545777   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1024 20:12:18.545734   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1024 20:12:18.683462   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1024 20:12:18.683513   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1024 20:12:18.683578   50077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1024 20:12:18.683656   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1024 20:12:18.683686   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1024 20:12:18.683732   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1024 20:12:18.688916   50077 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1024 20:12:18.688954   50077 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1024 20:12:18.689040   50077 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1024 20:12:20.355824   50077 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.666754363s)
	I1024 20:12:20.355859   50077 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1024 20:12:20.355920   50077 cache_images.go:92] LoadImages completed in 2.326833316s
	W1024 20:12:20.356004   50077 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
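(Editorial illustration, outside the captured log: the image-cache handling traced above boils down to two shell probes that the log itself shows — "podman image inspect --format {{.Id}}" to see whether an image already exists in the runtime, and "podman load -i <tarball>" to import a cached tarball when it does not. The following minimal Go sketch mirrors those commands; the image name and tarball path are illustrative, not minikube's actual implementation.)

// imagecache_sketch.go — illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

// imagePresent mirrors the "sudo podman image inspect --format {{.Id}}" probes:
// a zero exit status means the runtime already has the image.
func imagePresent(image string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil
}

// loadFromCache mirrors "sudo podman load -i <tarball>" for a cached image tarball.
func loadFromCache(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	// Hypothetical image/tarball pair, analogous to pause:3.1 in the log above.
	if !imagePresent("registry.k8s.io/pause:3.1") {
		if err := loadFromCache("/var/lib/minikube/images/pause_3.1"); err != nil {
			fmt.Println("load failed:", err)
		}
	}
}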
	I1024 20:12:20.356080   50077 ssh_runner.go:195] Run: crio config
	I1024 20:12:20.428753   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:12:20.428775   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:20.428793   50077 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:12:20.428835   50077 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467375 NodeName:old-k8s-version-467375 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1024 20:12:20.429015   50077 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467375"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-467375
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.71:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:12:20.429115   50077 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467375 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:12:20.429179   50077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1024 20:12:20.440158   50077 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:12:20.440239   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:12:20.450883   50077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1024 20:12:20.470913   50077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:12:20.490653   50077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1024 20:12:20.510287   50077 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I1024 20:12:20.514815   50077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:20.526910   50077 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375 for IP: 192.168.39.71
	I1024 20:12:20.526943   50077 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:20.527172   50077 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:12:20.527227   50077 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:12:20.527313   50077 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.key
	I1024 20:12:20.527401   50077 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.key.f4667c0f
	I1024 20:12:20.527458   50077 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.key
	I1024 20:12:20.527617   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:12:20.527658   50077 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:12:20.527672   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:12:20.527712   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:12:20.527768   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:12:20.527803   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:12:20.527867   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:20.528563   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:12:20.561437   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:12:20.593396   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:12:20.626812   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 20:12:20.659073   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:12:20.690934   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:12:20.723550   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:12:20.754091   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:12:20.785078   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:12:20.813190   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:12:20.845338   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:12:20.876594   50077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:12:20.899560   50077 ssh_runner.go:195] Run: openssl version
	I1024 20:12:20.907482   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:12:20.922776   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.929623   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.929693   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.935454   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:12:20.947494   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:12:20.958906   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.964115   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.964177   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.970084   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:12:20.982477   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:12:20.995317   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.000479   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.000568   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.006797   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:12:21.020161   50077 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:12:21.025037   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:12:21.033376   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:12:21.041858   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:12:21.050119   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:12:21.058140   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:12:21.066151   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
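(Editorial illustration, outside the captured log: each "openssl x509 -noout -in <cert> -checkend 86400" run above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, which is how minikube decides it can skip regenerating certs. A minimal Go sketch of the same check follows; the certificate path is illustrative.)

// certcheck_sketch.go — illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

// validForADay mirrors "openssl x509 -noout -in <cert> -checkend 86400":
// exit status 0 means the certificate does not expire within the next 86400 seconds.
func validForADay(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	// Illustrative path taken from the log above.
	cert := "/var/lib/minikube/certs/apiserver-etcd-client.crt"
	fmt.Printf("%s valid for another 24h: %v\n", cert, validForADay(cert))
}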
	I1024 20:12:21.074299   50077 kubeadm.go:404] StartCluster: {Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:12:21.074409   50077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:12:21.074454   50077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:21.125486   50077 cri.go:89] found id: ""
	I1024 20:12:21.125559   50077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:12:21.139034   50077 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:12:21.139058   50077 kubeadm.go:636] restartCluster start
	I1024 20:12:21.139113   50077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:12:21.151994   50077 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.153569   50077 kubeconfig.go:92] found "old-k8s-version-467375" server: "https://192.168.39.71:8443"
	I1024 20:12:21.157114   50077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:12:21.169908   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.169998   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.186116   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.186138   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.186187   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.201283   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.702002   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.702084   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.717499   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:22.201839   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:22.201946   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:22.217814   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:22.702454   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:22.702525   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:22.720944   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:18.882382   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:18.882833   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:18.882869   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:18.882798   50821 retry.go:31] will retry after 1.854607935s: waiting for machine to come up
	I1024 20:12:20.738594   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:20.739327   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:20.739375   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:20.739255   50821 retry.go:31] will retry after 2.774006375s: waiting for machine to come up
	I1024 20:12:18.891092   49708 addons.go:502] enable addons completed in 2.901476764s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1024 20:12:20.898330   49708 node_ready.go:58] node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:22.897985   49708 node_ready.go:49] node "default-k8s-diff-port-643126" has status "Ready":"True"
	I1024 20:12:22.898016   49708 node_ready.go:38] duration metric: took 6.51394456s waiting for node "default-k8s-diff-port-643126" to be "Ready" ...
	I1024 20:12:22.898029   49708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:22.907326   49708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:22.915330   49708 pod_ready.go:92] pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:22.915354   49708 pod_ready.go:81] duration metric: took 7.999933ms waiting for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:22.915366   49708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:20.698011   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:23.195726   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:23.201529   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:23.201620   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:23.215098   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:23.701482   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:23.701572   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:23.715481   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:24.201550   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:24.201610   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:24.218008   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:24.701489   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:24.701591   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:24.716960   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:25.201492   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:25.201558   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:25.215972   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:25.701398   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:25.701506   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:25.714016   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:26.201948   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:26.202018   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:26.215403   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:26.701876   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:26.701948   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:26.714598   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:27.202095   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:27.202161   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:27.215728   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:27.702476   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:27.702589   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:27.715925   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
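(Editorial illustration, outside the captured log: the repeated "Checking apiserver status" entries above are a poll loop — roughly every 500ms minikube runs "sudo pgrep -xnf kube-apiserver.*minikube.*" and treats a non-zero exit as "apiserver not up yet", until an overall deadline expires and the "context deadline exceeded" decision below is reached. A hedged Go sketch of that polling pattern follows; the timeout value is illustrative.)

// apiserverpoll_sketch.go — illustrative only.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors "sudo pgrep -xnf kube-apiserver.*minikube.*":
// pgrep exits 0 only when a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// waitForAPIServer polls every 500ms until the process appears or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		if apiserverRunning() {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err()) // e.g. "context deadline exceeded"
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	if err := waitForAPIServer(10 * time.Second); err != nil {
		fmt.Println(err)
	}
}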
	I1024 20:12:23.514310   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:23.514813   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:23.514850   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:23.514763   50821 retry.go:31] will retry after 3.277478612s: waiting for machine to come up
	I1024 20:12:26.793845   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:26.794291   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:26.794312   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:26.794249   50821 retry.go:31] will retry after 4.518205069s: waiting for machine to come up
	I1024 20:12:24.934951   49708 pod_ready.go:92] pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:24.934977   49708 pod_ready.go:81] duration metric: took 2.019602232s waiting for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.934990   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.940403   49708 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:24.940424   49708 pod_ready.go:81] duration metric: took 5.425415ms waiting for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.940437   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.805106   49708 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:25.805127   49708 pod_ready.go:81] duration metric: took 864.682784ms waiting for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.805137   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.096987   49708 pod_ready.go:92] pod "kube-proxy-x4zbh" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:26.097025   49708 pod_ready.go:81] duration metric: took 291.86715ms waiting for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.097040   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.497404   49708 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:26.497425   49708 pod_ready.go:81] duration metric: took 400.376909ms waiting for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.497444   49708 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.694439   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:28.192955   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:28.201919   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:28.201990   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:28.215407   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:28.701578   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:28.701658   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:28.714135   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:29.202433   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:29.202553   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:29.214936   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:29.702439   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:29.702499   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:29.714852   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:30.202428   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:30.202500   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:30.214283   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:30.702441   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:30.702500   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:30.715562   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:31.170652   50077 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:12:31.170682   50077 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:12:31.170693   50077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:12:31.170772   50077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:31.231971   50077 cri.go:89] found id: ""
	I1024 20:12:31.232068   50077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:12:31.249451   50077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:12:31.261057   50077 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:12:31.261124   50077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:31.270878   50077 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:31.270901   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:31.407803   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.357283   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.567466   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.659297   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.745553   50077 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:12:32.745629   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:32.761052   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:31.314269   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.314887   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has current primary IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.314912   49071 main.go:141] libmachine: (no-preload-014826) Found IP for machine: 192.168.50.162
	I1024 20:12:31.314926   49071 main.go:141] libmachine: (no-preload-014826) Reserving static IP address...
	I1024 20:12:31.315396   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "no-preload-014826", mac: "52:54:00:33:64:68", ip: "192.168.50.162"} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.315434   49071 main.go:141] libmachine: (no-preload-014826) DBG | skip adding static IP to network mk-no-preload-014826 - found existing host DHCP lease matching {name: "no-preload-014826", mac: "52:54:00:33:64:68", ip: "192.168.50.162"}
	I1024 20:12:31.315448   49071 main.go:141] libmachine: (no-preload-014826) Reserved static IP address: 192.168.50.162
	I1024 20:12:31.315465   49071 main.go:141] libmachine: (no-preload-014826) Waiting for SSH to be available...
	I1024 20:12:31.315483   49071 main.go:141] libmachine: (no-preload-014826) DBG | Getting to WaitForSSH function...
	I1024 20:12:31.318209   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.318611   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.318653   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.318819   49071 main.go:141] libmachine: (no-preload-014826) DBG | Using SSH client type: external
	I1024 20:12:31.318871   49071 main.go:141] libmachine: (no-preload-014826) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa (-rw-------)
	I1024 20:12:31.318916   49071 main.go:141] libmachine: (no-preload-014826) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:12:31.318941   49071 main.go:141] libmachine: (no-preload-014826) DBG | About to run SSH command:
	I1024 20:12:31.318957   49071 main.go:141] libmachine: (no-preload-014826) DBG | exit 0
	I1024 20:12:31.414054   49071 main.go:141] libmachine: (no-preload-014826) DBG | SSH cmd err, output: <nil>: 
	I1024 20:12:31.414566   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetConfigRaw
	I1024 20:12:31.415326   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:31.418120   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.418549   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.418582   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.418808   49071 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/config.json ...
	I1024 20:12:31.419009   49071 machine.go:88] provisioning docker machine ...
	I1024 20:12:31.419033   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:31.419222   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.419399   49071 buildroot.go:166] provisioning hostname "no-preload-014826"
	I1024 20:12:31.419423   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.419578   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.421861   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.422241   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.422273   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.422501   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.422676   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.422847   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.423066   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.423250   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.423707   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.423724   49071 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-014826 && echo "no-preload-014826" | sudo tee /etc/hostname
	I1024 20:12:31.557472   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-014826
	
	I1024 20:12:31.557504   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.560529   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.560928   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.560979   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.561201   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.561457   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.561654   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.561817   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.561968   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.562329   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.562357   49071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-014826' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-014826/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-014826' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:12:31.694896   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:12:31.694927   49071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:12:31.694948   49071 buildroot.go:174] setting up certificates
	I1024 20:12:31.694959   49071 provision.go:83] configureAuth start
	I1024 20:12:31.694967   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.695264   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:31.697858   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.698148   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.698176   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.698357   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.700982   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.701332   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.701364   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.701570   49071 provision.go:138] copyHostCerts
	I1024 20:12:31.701625   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:12:31.701642   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:12:31.701733   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:12:31.701845   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:12:31.701857   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:12:31.701883   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:12:31.701947   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:12:31.701956   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:12:31.701978   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:12:31.702043   49071 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.no-preload-014826 san=[192.168.50.162 192.168.50.162 localhost 127.0.0.1 minikube no-preload-014826]
	I1024 20:12:31.798568   49071 provision.go:172] copyRemoteCerts
	I1024 20:12:31.798622   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:12:31.798642   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.801859   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.802237   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.802269   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.802465   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.802672   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.802867   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.803027   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:31.891633   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:12:31.916451   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1024 20:12:31.937924   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:12:31.961360   49071 provision.go:86] duration metric: configureAuth took 266.390893ms
	I1024 20:12:31.961384   49071 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:12:31.961573   49071 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:12:31.961660   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.964354   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.964662   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.964719   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.964798   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.965002   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.965170   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.965329   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.965516   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.965961   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.965983   49071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:12:32.275884   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:12:32.275911   49071 machine.go:91] provisioned docker machine in 856.887593ms
	I1024 20:12:32.275923   49071 start.go:300] post-start starting for "no-preload-014826" (driver="kvm2")
	I1024 20:12:32.275935   49071 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:12:32.275957   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.276268   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:12:32.276298   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.279248   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.279642   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.279678   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.279798   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.279985   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.280182   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.280455   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.371931   49071 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:12:32.375989   49071 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:12:32.376009   49071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:12:32.376077   49071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:12:32.376173   49071 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:12:32.376295   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:12:32.385018   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:32.408697   49071 start.go:303] post-start completed in 132.759815ms
	I1024 20:12:32.408719   49071 fix.go:56] fixHost completed within 21.530244363s
	I1024 20:12:32.408744   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.411800   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.412155   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.412189   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.412363   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.412574   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.412741   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.412916   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.413083   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:32.413469   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:32.413483   49071 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:12:32.534092   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178352.477877903
	
	I1024 20:12:32.534116   49071 fix.go:206] guest clock: 1698178352.477877903
	I1024 20:12:32.534127   49071 fix.go:219] Guest: 2023-10-24 20:12:32.477877903 +0000 UTC Remote: 2023-10-24 20:12:32.408724059 +0000 UTC m=+364.183674654 (delta=69.153844ms)
	I1024 20:12:32.534153   49071 fix.go:190] guest clock delta is within tolerance: 69.153844ms
	I1024 20:12:32.534159   49071 start.go:83] releasing machines lock for "no-preload-014826", held for 21.655714466s
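
For reference, the guest-clock check just above parses the guest's "date +%s.%N" output (1698178352.477877903) and compares it with the host clock, accepting the ~69ms delta. A minimal, self-contained Go sketch of that comparison; the helper names and the 2-second tolerance are illustrative, not minikube's actual fix.go logic:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts the guest's `date +%s.%N` output
    // (e.g. "1698178352.477877903") into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1698178352.477877903")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        // Illustrative tolerance; the log above reports a ~69ms delta as acceptable.
        const tolerance = 2 * time.Second
        fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
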
	I1024 20:12:32.534185   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.534468   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:32.537523   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.537932   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.537961   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.538160   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.538690   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.538919   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.539004   49071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:12:32.539089   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.539138   49071 ssh_runner.go:195] Run: cat /version.json
	I1024 20:12:32.539166   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.542176   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542308   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542652   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.542689   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.542714   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542732   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542981   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.542985   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.543207   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.543214   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.543387   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.543429   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.543573   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.543579   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.631242   49071 ssh_runner.go:195] Run: systemctl --version
	I1024 20:12:32.657695   49071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:12:32.808471   49071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:12:32.815640   49071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:12:32.815712   49071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:12:32.830198   49071 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:12:32.830219   49071 start.go:472] detecting cgroup driver to use...
	I1024 20:12:32.830295   49071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:12:32.845231   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:12:32.863283   49071 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:12:32.863328   49071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:12:32.878295   49071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:12:32.894182   49071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:12:33.024491   49071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:12:33.156548   49071 docker.go:214] disabling docker service ...
	I1024 20:12:33.156621   49071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:12:33.169940   49071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:12:33.182368   49071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:12:28.804366   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:30.806145   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:32.806217   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:30.193022   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:32.195173   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:33.297156   49071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:12:33.434526   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:12:33.453482   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:12:33.471594   49071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:12:33.471665   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.481491   49071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:12:33.481563   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.490505   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.500003   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.509825   49071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:12:33.524014   49071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:12:33.532876   49071 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:12:33.532936   49071 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:12:33.545922   49071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:12:33.554519   49071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:12:33.661858   49071 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:12:33.867286   49071 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:12:33.867361   49071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:12:33.873180   49071 start.go:540] Will wait 60s for crictl version
	I1024 20:12:33.873259   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:33.877238   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:12:33.918479   49071 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:12:33.918624   49071 ssh_runner.go:195] Run: crio --version
	I1024 20:12:33.970986   49071 ssh_runner.go:195] Run: crio --version
	I1024 20:12:34.026667   49071 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
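
The sed edits earlier in this block point cri-o at the registry.k8s.io/pause:3.9 pause image and switch it to the cgroupfs cgroup manager (with conmon_cgroup set to "pod") before restarting the service. A self-contained Go sketch of an equivalent rewrite of a 02-crio.conf drop-in; the file path, helper name and exact line handling are illustrative, not minikube's crio.go code:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // rewriteCrioConf applies the same kind of edits the sed commands above make
    // to /etc/crio/crio.conf.d/02-crio.conf: set the pause image, force the
    // cgroupfs cgroup manager, and re-add conmon_cgroup = "pod" after it.
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        // Drop any existing conmon_cgroup line; it is re-added below.
        out := regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(data, nil)
        out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(out, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = %q", cgroupManager, "pod")))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // Operating on a local copy for illustration; on the node the file lives
        // under /etc/crio/crio.conf.d/ and crio is restarted afterwards.
        if err := rewriteCrioConf("./02-crio.conf", "registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
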
	I1024 20:12:33.278190   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:33.777448   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:34.277381   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:34.320204   50077 api_server.go:72] duration metric: took 1.574651034s to wait for apiserver process to appear ...
	I1024 20:12:34.320230   50077 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:12:34.320258   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.320744   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I1024 20:12:34.320773   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.321162   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I1024 20:12:34.821724   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.028144   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:34.031311   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:34.031699   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:34.031733   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:34.031888   49071 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1024 20:12:34.036386   49071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:34.052307   49071 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:12:34.052360   49071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:34.099209   49071 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:12:34.099236   49071 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 20:12:34.099291   49071 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.099331   49071 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.099331   49071 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.099414   49071 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.099497   49071 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1024 20:12:34.099512   49071 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.099547   49071 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.099575   49071 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.101069   49071 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.101083   49071 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.101096   49071 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1024 20:12:34.101077   49071 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.101135   49071 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.101147   49071 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.101173   49071 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.101428   49071 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.283586   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.292930   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.294280   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.303296   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1024 20:12:34.314337   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.323356   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.327726   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.373724   49071 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1024 20:12:34.373774   49071 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.373819   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.466499   49071 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1024 20:12:34.466540   49071 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.466582   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.487167   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.489929   49071 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1024 20:12:34.489986   49071 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.490027   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588137   49071 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1024 20:12:34.588178   49071 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.588206   49071 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1024 20:12:34.588231   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588248   49071 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.588286   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588308   49071 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1024 20:12:34.588330   49071 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.588340   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.588358   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588388   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.588410   49071 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1024 20:12:34.588427   49071 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.588447   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588448   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.605099   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.693897   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.694097   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1024 20:12:34.694204   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.707142   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.707184   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.707265   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1024 20:12:34.707388   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:34.707384   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1024 20:12:34.707516   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:34.722106   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1024 20:12:34.722205   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:34.776997   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1024 20:12:34.777019   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.777067   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.777089   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1024 20:12:34.777180   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:34.804122   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1024 20:12:34.804241   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:34.814486   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1024 20:12:34.814532   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1024 20:12:34.814567   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1024 20:12:34.814607   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1024 20:12:34.814634   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:38.115460   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (3.338366217s)
	I1024 20:12:38.115492   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1024 20:12:38.115516   49071 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:38.115548   49071 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3: (3.338341429s)
	I1024 20:12:38.115570   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:38.115586   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1024 20:12:38.115618   49071 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3: (3.311351093s)
	I1024 20:12:38.115644   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1024 20:12:38.115650   49071 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.30100028s)
	I1024 20:12:38.115665   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
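
The image-cache step above stats each tarball under /var/lib/minikube/images, skips the transfer when it already exists, and then hands it to podman. A small Go sketch of that skip-then-load pattern, run locally via os/exec as a stand-in for minikube's SSH runner; the path and helper name are illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadCachedImage mirrors the pattern in the log: if the image tarball is
    // already present on the node, skip the copy and load it with podman.
    func loadCachedImage(tarball string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("tarball not cached, would need to copy it first: %w", err)
        }
        cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := loadCachedImage("/var/lib/minikube/images/kube-apiserver_v1.28.3"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
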
	I1024 20:12:34.807460   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:37.307370   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:34.696540   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:37.192160   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:39.822511   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1024 20:12:39.822561   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:40.734083   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:12:40.734125   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:12:40.734161   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:40.777985   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1024 20:12:40.778037   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1024 20:12:40.822134   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.042292   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.042343   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:41.321887   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.363625   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.363682   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:41.821995   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.828080   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.828114   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:42.321381   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:42.331626   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1024 20:12:42.342584   50077 api_server.go:141] control plane version: v1.16.0
	I1024 20:12:42.342614   50077 api_server.go:131] duration metric: took 8.022377051s to wait for apiserver health ...
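
The healthz wait above treats the early 403 and 500 responses as "not ready yet" and keeps polling until a plain 200 "ok" comes back. A minimal Go sketch of such a polling loop, assuming a generic HTTP client with TLS verification disabled for the bootstrap self-signed certificate; the URL, interval and timeout are illustrative, not api_server.go's actual logic:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline expires; any other status is treated as "keep waiting".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver serves a self-signed certificate during bring-up.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.71:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
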
	I1024 20:12:42.342626   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:12:42.342634   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:42.344676   50077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:12:42.346118   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:12:42.363399   50077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:12:42.389481   50077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:12:42.403326   50077 system_pods.go:59] 7 kube-system pods found
	I1024 20:12:42.403370   50077 system_pods.go:61] "coredns-5644d7b6d9-x567q" [1dc7f1c2-4997-4330-a9bc-b914b1c1db9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:12:42.403381   50077 system_pods.go:61] "etcd-old-k8s-version-467375" [62c8ab28-033f-43fa-96b2-e127d8d46730] Running
	I1024 20:12:42.403389   50077 system_pods.go:61] "kube-apiserver-old-k8s-version-467375" [87c58a79-9f12-4be3-a450-69aa22674541] Running
	I1024 20:12:42.403398   50077 system_pods.go:61] "kube-controller-manager-old-k8s-version-467375" [6bf66f9f-1431-4b3f-b186-528945c54a63] Running
	I1024 20:12:42.403412   50077 system_pods.go:61] "kube-proxy-jdvck" [d35f42b9-9be8-43ee-8434-3d557e31bfde] Running
	I1024 20:12:42.403418   50077 system_pods.go:61] "kube-scheduler-old-k8s-version-467375" [63ae0d31-ace3-4490-a2e8-ed110e3a1072] Running
	I1024 20:12:42.403424   50077 system_pods.go:61] "storage-provisioner" [9105f8d8-3aa1-422d-acf2-9f83e9ede8af] Running
	I1024 20:12:42.403431   50077 system_pods.go:74] duration metric: took 13.927429ms to wait for pod list to return data ...
	I1024 20:12:42.403440   50077 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:12:42.408844   50077 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:12:42.408890   50077 node_conditions.go:123] node cpu capacity is 2
	I1024 20:12:42.408905   50077 node_conditions.go:105] duration metric: took 5.459392ms to run NodePressure ...
	I1024 20:12:42.408926   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:42.701645   50077 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:12:42.707084   50077 retry.go:31] will retry after 366.455415ms: kubelet not initialised
	I1024 20:12:39.807495   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:42.306172   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:39.193434   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:41.195135   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:43.694847   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:43.078083   50077 retry.go:31] will retry after 411.231242ms: kubelet not initialised
	I1024 20:12:43.494711   50077 retry.go:31] will retry after 768.972767ms: kubelet not initialised
	I1024 20:12:44.268690   50077 retry.go:31] will retry after 693.655783ms: kubelet not initialised
	I1024 20:12:45.186580   50077 retry.go:31] will retry after 1.610937297s: kubelet not initialised
	I1024 20:12:46.803897   50077 retry.go:31] will retry after 959.133509ms: kubelet not initialised
	I1024 20:12:47.768260   50077 retry.go:31] will retry after 1.51466069s: kubelet not initialised
	I1024 20:12:45.464752   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.34915976s)
	I1024 20:12:45.464779   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1024 20:12:45.464821   49071 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:45.464899   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:46.936699   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.471766425s)
	I1024 20:12:46.936725   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1024 20:12:46.936750   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:46.936790   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:44.806094   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:46.807137   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:45.696196   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:48.192732   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:49.288179   50077 retry.go:31] will retry after 5.048749504s: kubelet not initialised
	I1024 20:12:49.615688   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.678859869s)
	I1024 20:12:49.615726   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1024 20:12:49.615763   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:49.615840   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:51.387159   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.771279542s)
	I1024 20:12:51.387185   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1024 20:12:51.387209   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:51.387258   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:52.868127   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.480840395s)
	I1024 20:12:52.868158   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1024 20:12:52.868184   49071 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:52.868233   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:49.304156   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:51.305456   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:53.307726   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:50.195756   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:52.196133   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:54.342759   50077 retry.go:31] will retry after 8.402807892s: kubelet not initialised
	I1024 20:12:53.617841   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1024 20:12:53.617883   49071 cache_images.go:123] Successfully loaded all cached images
	I1024 20:12:53.617889   49071 cache_images.go:92] LoadImages completed in 19.518639759s
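Each "Loading image" step above amounts to running sudo podman load -i against a cached tarball on the guest. A rough local equivalent, with an illustrative tarball list rather than minikube's real cache layout or its ssh_runner plumbing:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Illustrative tarball paths; minikube derives these from its image cache.
	images := []string{
		"/var/lib/minikube/images/etcd_3.5.9-0",
		"/var/lib/minikube/images/coredns_v1.10.1",
		"/var/lib/minikube/images/kube-controller-manager_v1.28.3",
	}
	for _, tarball := range images {
		// Equivalent of: sudo podman load -i <tarball>
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			fmt.Printf("loading %s failed: %v\n%s\n", tarball, err, out)
			continue
		}
		fmt.Printf("loaded %s\n", tarball)
	}
}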
	I1024 20:12:53.617972   49071 ssh_runner.go:195] Run: crio config
	I1024 20:12:53.677157   49071 cni.go:84] Creating CNI manager for ""
	I1024 20:12:53.677181   49071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:53.677198   49071 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:12:53.677215   49071 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.162 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-014826 NodeName:no-preload-014826 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:12:53.677386   49071 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-014826"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:12:53.677482   49071 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-014826 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-014826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:12:53.677552   49071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:12:53.688840   49071 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:12:53.688904   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:12:53.700095   49071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1024 20:12:53.717176   49071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:12:53.737316   49071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
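The kubeadm config printed at kubeadm.go:181 above is rendered from the options recorded at kubeadm.go:176 and then written to /var/tmp/minikube/kubeadm.yaml.new on the guest. A stripped-down sketch of that render step using text/template; the parameter struct and template text here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the parameters that feed the generated config.
type kubeadmParams struct {
	AdvertiseAddress  string
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const configTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.50.162",
		NodeName:          "no-preload-014826",
		KubernetesVersion: "v1.28.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	t := template.Must(template.New("kubeadm").Parse(configTmpl))
	// Writes the rendered config to stdout; minikube instead copies it to
	// /var/tmp/minikube/kubeadm.yaml.new over SSH.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}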
	I1024 20:12:53.756100   49071 ssh_runner.go:195] Run: grep 192.168.50.162	control-plane.minikube.internal$ /etc/hosts
	I1024 20:12:53.760013   49071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:53.771571   49071 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826 for IP: 192.168.50.162
	I1024 20:12:53.771601   49071 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:53.771752   49071 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:12:53.771811   49071 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:12:53.771896   49071 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.key
	I1024 20:12:53.771975   49071 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.key.1b8245f8
	I1024 20:12:53.772056   49071 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.key
	I1024 20:12:53.772205   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:12:53.772250   49071 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:12:53.772262   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:12:53.772303   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:12:53.772333   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:12:53.772354   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:12:53.772397   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:53.773081   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:12:53.797387   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:12:53.822084   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:12:53.846401   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:12:53.869361   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:12:53.891519   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:12:53.914051   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:12:53.935925   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:12:53.958389   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:12:53.982011   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:12:54.005921   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:12:54.029793   49071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:12:54.047319   49071 ssh_runner.go:195] Run: openssl version
	I1024 20:12:54.053493   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:12:54.064414   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.069060   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.069115   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.075137   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:12:54.088046   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:12:54.099949   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.104810   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.104867   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.110617   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:12:54.122160   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:12:54.133062   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.137858   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.137922   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.144146   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:12:54.155998   49071 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:12:54.160989   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:12:54.167441   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:12:54.173797   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:12:54.180320   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:12:54.186876   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:12:54.193624   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
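The string of openssl x509 -checkend 86400 commands above verifies that each control-plane certificate remains valid for at least another 24 hours. The same check expressed in Go; the helper name and the small CLI wrapper around it are illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validForAnotherDay mirrors `openssl x509 -noout -in <file> -checkend 86400`:
// it fails if the certificate expires within the next 24 hours.
func validForAnotherDay(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM data found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		return fmt.Errorf("%s expires at %s", path, cert.NotAfter)
	}
	return nil
}

func main() {
	for _, path := range os.Args[1:] {
		if err := validForAnotherDay(path); err != nil {
			fmt.Println("WILL EXPIRE:", err)
			continue
		}
		fmt.Println("ok:", path)
	}
}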
	I1024 20:12:54.200066   49071 kubeadm.go:404] StartCluster: {Name:no-preload-014826 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-014826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:12:54.200165   49071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:12:54.200202   49071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:54.253207   49071 cri.go:89] found id: ""
	I1024 20:12:54.253267   49071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:12:54.264316   49071 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:12:54.264348   49071 kubeadm.go:636] restartCluster start
	I1024 20:12:54.264404   49071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:12:54.276382   49071 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.277506   49071 kubeconfig.go:92] found "no-preload-014826" server: "https://192.168.50.162:8443"
	I1024 20:12:54.279888   49071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:12:54.290005   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.290052   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.302383   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.302400   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.302447   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.315130   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.815483   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.815574   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.827862   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.315372   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:55.315430   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:55.328409   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.816079   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:55.816141   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:55.829755   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:56.315782   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:56.315869   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:56.329006   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:56.815526   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:56.815621   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:56.828167   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:57.315692   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:57.315781   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:57.328590   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:57.816175   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:57.816250   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:57.832014   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.805830   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:57.810013   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:54.692702   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:57.192210   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:02.750533   50077 retry.go:31] will retry after 7.667287878s: kubelet not initialised
	I1024 20:12:58.315841   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:58.315922   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:58.329743   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:58.815711   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:58.815779   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:58.828215   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:59.315817   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:59.315924   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:59.328911   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:59.815493   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:59.815583   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:59.829684   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.316215   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:00.316294   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:00.330227   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.815830   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:00.815901   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:00.828290   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:01.315228   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:01.315319   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:01.329972   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:01.815426   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:01.815495   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:01.829199   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:02.315754   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:02.315834   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:02.328463   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:02.816091   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:02.816175   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:02.830548   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.304116   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:02.304336   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:59.193761   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:01.692343   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:03.693961   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:03.315186   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:03.315249   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:03.327729   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:03.815302   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:03.815389   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:03.827308   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:04.290952   49071 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:13:04.290993   49071 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:13:04.291005   49071 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:13:04.291078   49071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:13:04.333468   49071 cri.go:89] found id: ""
	I1024 20:13:04.333543   49071 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:13:04.351889   49071 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:13:04.362176   49071 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:13:04.362251   49071 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:13:04.372650   49071 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:13:04.372683   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:04.495803   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.080838   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.290640   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.379839   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.458741   49071 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:13:05.458843   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:05.475039   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:05.997438   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:06.496596   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:06.996587   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:07.496933   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:07.514268   49071 api_server.go:72] duration metric: took 2.055524654s to wait for apiserver process to appear ...
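The repeated sudo pgrep -xnf kube-apiserver.*minikube.* calls above are a wait-for-process loop: pgrep exits non-zero until the apiserver container has actually started, so the caller retries roughly twice a second until a PID appears or it gives up. A minimal local sketch of that loop (run directly instead of over SSH; the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerProcess polls pgrep until the kube-apiserver process
// appears or the timeout is reached. A non-zero pgrep exit means "no match
// yet" and is treated as a reason to retry, not a hard failure.
func waitForAPIServerProcess(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // the newest matching PID
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerProcess(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}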
	I1024 20:13:07.514294   49071 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:13:07.514310   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:07.514802   49071 api_server.go:269] stopped: https://192.168.50.162:8443/healthz: Get "https://192.168.50.162:8443/healthz": dial tcp 192.168.50.162:8443: connect: connection refused
	I1024 20:13:07.514840   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:07.515243   49071 api_server.go:269] stopped: https://192.168.50.162:8443/healthz: Get "https://192.168.50.162:8443/healthz": dial tcp 192.168.50.162:8443: connect: connection refused
	I1024 20:13:08.015912   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:04.306097   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:06.805484   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:05.698099   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:08.196336   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:10.424613   50077 retry.go:31] will retry after 17.161095389s: kubelet not initialised
	I1024 20:13:12.512885   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.512923   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:12.512936   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:12.564368   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.564415   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:12.564435   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:12.578188   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.578210   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:13.015415   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:13.022900   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:13:13.022939   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:13:09.305906   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:11.805107   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:10.693989   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:12.696233   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:13.515731   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:13.520510   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:13:13.520565   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:13:14.015693   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:14.021308   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 200:
	ok
	I1024 20:13:14.029247   49071 api_server.go:141] control plane version: v1.28.3
	I1024 20:13:14.029271   49071 api_server.go:131] duration metric: took 6.514969351s to wait for apiserver health ...
	I1024 20:13:14.029281   49071 cni.go:84] Creating CNI manager for ""
	I1024 20:13:14.029289   49071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:13:14.031023   49071 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:13:14.032390   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:13:14.042542   49071 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:13:14.061827   49071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:13:14.077006   49071 system_pods.go:59] 8 kube-system pods found
	I1024 20:13:14.077041   49071 system_pods.go:61] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:13:14.077058   49071 system_pods.go:61] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:13:14.077068   49071 system_pods.go:61] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:13:14.077078   49071 system_pods.go:61] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:13:14.077088   49071 system_pods.go:61] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:13:14.077102   49071 system_pods.go:61] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:13:14.077114   49071 system_pods.go:61] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:13:14.077125   49071 system_pods.go:61] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:13:14.077140   49071 system_pods.go:74] duration metric: took 15.296766ms to wait for pod list to return data ...
	I1024 20:13:14.077150   49071 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:13:14.080871   49071 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:13:14.080896   49071 node_conditions.go:123] node cpu capacity is 2
	I1024 20:13:14.080908   49071 node_conditions.go:105] duration metric: took 3.7473ms to run NodePressure ...
	I1024 20:13:14.080921   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:14.292868   49071 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:13:14.297583   49071 kubeadm.go:787] kubelet initialised
	I1024 20:13:14.297611   49071 kubeadm.go:788] duration metric: took 4.717728ms waiting for restarted kubelet to initialise ...
	I1024 20:13:14.297621   49071 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:14.303742   49071 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.309570   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.309600   49071 pod_ready.go:81] duration metric: took 5.835917ms waiting for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.309608   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.309616   49071 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.316423   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "etcd-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.316453   49071 pod_ready.go:81] duration metric: took 6.829373ms waiting for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.316577   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "etcd-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.316593   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.325238   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-apiserver-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.325271   49071 pod_ready.go:81] duration metric: took 8.669582ms waiting for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.325280   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-apiserver-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.325288   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.466293   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.466319   49071 pod_ready.go:81] duration metric: took 141.023699ms waiting for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.466331   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.466342   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.865820   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-proxy-hvphg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.865855   49071 pod_ready.go:81] duration metric: took 399.504017ms waiting for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.865867   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-proxy-hvphg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.865876   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:15.266786   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-scheduler-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.266820   49071 pod_ready.go:81] duration metric: took 400.936146ms waiting for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:15.266833   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-scheduler-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.266844   49071 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:15.666547   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.666582   49071 pod_ready.go:81] duration metric: took 399.72944ms waiting for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:15.666596   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.666617   49071 pod_ready.go:38] duration metric: took 1.368975115s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
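The pod_ready.go lines above show the harness polling each system-critical pod until its Ready condition turns True, and skipping the wait (with a logged error) while the hosting node itself reports "Ready":"False". A minimal client-go sketch of that polling loop is below; it is an illustrative approximation, not the minikube implementation, and the kubeconfig path and pod name are placeholders.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True, roughly
// mirroring the pod_ready.go wait loop in the log above (sketch only).
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient errors: keep polling until the timeout
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Placeholder kubeconfig path; the harness writes its own under the profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-5dd5756b68-gnn8j", 4*time.Minute); err != nil {
		fmt.Println("pod did not become Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}

The same condition check explains the "(skipping!)" entries: while the node is NotReady the per-pod wait is abandoned early rather than consuming the full timeout.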
	I1024 20:13:15.666636   49071 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:13:15.686675   49071 ops.go:34] apiserver oom_adj: -16
	I1024 20:13:15.686696   49071 kubeadm.go:640] restartCluster took 21.422341568s
	I1024 20:13:15.686706   49071 kubeadm.go:406] StartCluster complete in 21.486646231s
	I1024 20:13:15.686737   49071 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:13:15.686823   49071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:13:15.688903   49071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:13:15.689192   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:13:15.689321   49071 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:13:15.689405   49071 addons.go:69] Setting storage-provisioner=true in profile "no-preload-014826"
	I1024 20:13:15.689423   49071 addons.go:231] Setting addon storage-provisioner=true in "no-preload-014826"
	I1024 20:13:15.689462   49071 addons.go:69] Setting metrics-server=true in profile "no-preload-014826"
	I1024 20:13:15.689490   49071 addons.go:231] Setting addon metrics-server=true in "no-preload-014826"
	W1024 20:13:15.689512   49071 addons.go:240] addon metrics-server should already be in state true
	I1024 20:13:15.689560   49071 host.go:66] Checking if "no-preload-014826" exists ...
	W1024 20:13:15.689463   49071 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:13:15.689649   49071 host.go:66] Checking if "no-preload-014826" exists ...
	I1024 20:13:15.689445   49071 addons.go:69] Setting default-storageclass=true in profile "no-preload-014826"
	I1024 20:13:15.689716   49071 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-014826"
	I1024 20:13:15.689431   49071 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:13:15.690018   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690051   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.690060   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690086   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.690173   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690225   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.695832   49071 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-014826" context rescaled to 1 replicas
	I1024 20:13:15.695868   49071 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:13:15.698104   49071 out.go:177] * Verifying Kubernetes components...
	I1024 20:13:15.701812   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:13:15.708637   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45543
	I1024 20:13:15.709086   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.709579   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I1024 20:13:15.709941   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.709959   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.710044   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.710478   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.710629   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.710640   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.710943   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.710954   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.711125   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.711367   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I1024 20:13:15.711702   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.711739   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.711852   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.712441   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.712453   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.713081   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.713312   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.717141   49071 addons.go:231] Setting addon default-storageclass=true in "no-preload-014826"
	W1024 20:13:15.717173   49071 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:13:15.717201   49071 host.go:66] Checking if "no-preload-014826" exists ...
	I1024 20:13:15.717655   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.717688   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.729423   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38983
	I1024 20:13:15.730145   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.730747   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.730763   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.730811   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
	I1024 20:13:15.731224   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.731294   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.731487   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.731691   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.731704   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.732239   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.732712   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.733909   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.736374   49071 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:13:15.734682   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.736231   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I1024 20:13:15.738165   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:13:15.738178   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:13:15.738198   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.739819   49071 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:13:15.741717   49071 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:13:15.741733   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:13:15.741752   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.739693   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.742202   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.742374   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.742389   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.742978   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.743000   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.743088   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.743253   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.743408   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.743896   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.744551   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.745028   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.745145   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.745266   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.745462   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.745486   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.745735   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.745870   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.745956   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.746023   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.782650   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I1024 20:13:15.783126   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.783699   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.783721   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.784051   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.784270   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.786114   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.786409   49071 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:13:15.786424   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:13:15.786439   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.788982   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.789347   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.789376   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.789622   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.789838   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.790047   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.790195   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.870753   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:13:15.870771   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:13:15.893772   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:13:15.893799   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:13:15.916179   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:13:15.928570   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:13:15.928596   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:13:15.950610   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:13:15.987129   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:13:15.987945   49071 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:13:15.987993   49071 node_ready.go:35] waiting up to 6m0s for node "no-preload-014826" to be "Ready" ...
	I1024 20:13:17.450534   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.53431699s)
	I1024 20:13:17.450534   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.499892733s)
	I1024 20:13:17.450586   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.450597   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.450609   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.450621   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451126   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451143   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451152   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451160   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451176   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.451180   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451186   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451190   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.451200   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451211   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451380   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451410   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451415   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451429   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451430   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451442   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.464276   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.464297   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.464561   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.464578   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.464585   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.626276   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.639098267s)
	I1024 20:13:17.626344   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.626364   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.626686   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.626711   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.626713   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.626765   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.626779   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.627054   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.627071   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.627082   49071 addons.go:467] Verifying addon metrics-server=true in "no-preload-014826"
	I1024 20:13:17.629289   49071 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1024 20:13:17.630781   49071 addons.go:502] enable addons completed in 1.94145774s: enabled=[storage-provisioner default-storageclass metrics-server]
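The addon sequence above scp's the manifests onto the node, runs kubectl apply against them, and then verifies metrics-server before logging "enable addons completed". As a rough stand-in for that verification step, the hedged sketch below checks whether the metrics-server Deployment in kube-system has its replicas ready; the deployment name and kubeconfig path are assumptions for illustration, not values taken from the harness.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// "metrics-server" is the usual Deployment name created by the addon manifests applied above.
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("metrics-server: %d/%d replicas ready\n", dep.Status.ReadyReplicas, dep.Status.Replicas)
}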
	I1024 20:13:18.084997   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:13.805526   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:15.807970   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:18.305400   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:15.194668   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:17.694096   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:20.085063   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:22.086260   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:23.087300   49071 node_ready.go:49] node "no-preload-014826" has status "Ready":"True"
	I1024 20:13:23.087338   49071 node_ready.go:38] duration metric: took 7.0993157s waiting for node "no-preload-014826" to be "Ready" ...
	I1024 20:13:23.087350   49071 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:23.093785   49071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:23.101553   49071 pod_ready.go:92] pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:23.101576   49071 pod_ready.go:81] duration metric: took 7.766543ms waiting for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:23.101588   49071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:20.808097   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:23.306150   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:19.696002   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:22.195097   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:27.592041   50077 kubeadm.go:787] kubelet initialised
	I1024 20:13:27.592064   50077 kubeadm.go:788] duration metric: took 44.890387595s waiting for restarted kubelet to initialise ...
	I1024 20:13:27.592071   50077 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:27.596611   50077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.601949   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.601972   50077 pod_ready.go:81] duration metric: took 5.342417ms waiting for pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.601979   50077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.607096   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.607118   50077 pod_ready.go:81] duration metric: took 5.132259ms waiting for pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.607130   50077 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.611971   50077 pod_ready.go:92] pod "etcd-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.611991   50077 pod_ready.go:81] duration metric: took 4.854068ms waiting for pod "etcd-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.612002   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.616975   50077 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.616995   50077 pod_ready.go:81] duration metric: took 4.985984ms waiting for pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.617006   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.620272   49071 pod_ready.go:92] pod "etcd-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:24.620294   49071 pod_ready.go:81] duration metric: took 1.518699618s waiting for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.620304   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.625954   49071 pod_ready.go:92] pod "kube-apiserver-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:24.625975   49071 pod_ready.go:81] duration metric: took 5.666043ms waiting for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.625985   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.096309   49071 pod_ready.go:92] pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.096338   49071 pod_ready.go:81] duration metric: took 2.470345358s waiting for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.096363   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.101417   49071 pod_ready.go:92] pod "kube-proxy-hvphg" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.101439   49071 pod_ready.go:81] duration metric: took 5.060638ms waiting for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.101457   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.487627   49071 pod_ready.go:92] pod "kube-scheduler-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.487655   49071 pod_ready.go:81] duration metric: took 386.189892ms waiting for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.487668   49071 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:25.805375   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:28.304314   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:24.199489   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:26.694339   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:27.990781   50077 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.990808   50077 pod_ready.go:81] duration metric: took 373.794401ms waiting for pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.990817   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jdvck" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.389532   50077 pod_ready.go:92] pod "kube-proxy-jdvck" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:28.389554   50077 pod_ready.go:81] duration metric: took 398.730628ms waiting for pod "kube-proxy-jdvck" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.389562   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.791217   50077 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:28.791245   50077 pod_ready.go:81] duration metric: took 401.675656ms waiting for pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.791259   50077 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:31.101273   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:29.797752   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:32.294823   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:30.305423   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:32.804966   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:29.196181   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:31.694405   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:33.597846   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.098571   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:34.295326   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.295502   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:35.307544   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:37.804734   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:34.193583   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.194545   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.693640   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.598114   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.598778   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.295582   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.797360   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.303674   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:42.305932   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:41.193409   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.694630   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.097684   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.599550   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.295412   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.295801   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:47.795437   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:44.806885   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:47.305513   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.695737   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:48.194597   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:48.098390   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:50.098465   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.598464   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:49.796354   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.296299   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:49.806019   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.304671   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:50.692678   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.693810   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:55.099808   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:57.596982   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:54.795042   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:56.795788   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:54.305480   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:56.805003   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:55.192666   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:57.192992   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.598091   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:02.097277   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.296748   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.799381   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.304665   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.305140   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.193682   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.694286   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.098871   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.598019   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.297114   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.796174   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:03.804391   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:05.805262   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.304535   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.194236   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.692751   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.693756   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.598278   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:10.598744   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:09.296355   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:11.794188   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:10.805023   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.304639   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:11.193179   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.696086   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.097069   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.598606   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.795184   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.797064   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.804980   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.304229   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:16.193316   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.193452   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.099418   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.597767   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.598478   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.294610   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.295299   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.295580   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.304386   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.304955   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.693442   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.695298   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.598688   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.098094   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.796039   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.294583   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.804411   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:26.805975   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:25.193984   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.194309   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.098448   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.597809   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.295004   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.296770   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.302945   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.303224   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.305333   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.693713   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.693887   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.695638   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.599337   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:36.098527   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.795335   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:35.796128   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:37.798347   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:35.307171   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:37.806058   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:36.192382   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:38.195932   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:38.098563   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.098830   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.598203   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.295075   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.796827   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.304919   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.805069   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.693934   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.694102   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.598267   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.097792   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:45.297437   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.795616   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.805647   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:46.806849   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.695195   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.194156   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.597390   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:52.099367   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:50.294686   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:52.297230   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.306571   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:51.804484   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.194481   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:51.693650   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:53.694257   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:54.597760   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.597897   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:54.794752   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.795666   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:53.805053   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.303997   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:58.304326   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.193984   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:58.693506   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:59.098488   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:01.098937   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:59.297834   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:01.795492   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:00.305557   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:02.805113   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:00.694107   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.194559   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.597853   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:05.598764   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.798231   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:06.296567   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:04.805204   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:06.806277   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:05.693959   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.194793   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.098369   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:10.099343   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:12.597632   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.795941   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:11.295163   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:09.303880   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:11.308399   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:10.692947   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:12.694115   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.098788   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.598778   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:13.297546   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.799219   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:13.804941   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.805508   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.805620   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.194071   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.692344   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.099461   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:22.598528   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:18.294855   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.795197   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.303894   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:22.807109   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:19.693273   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:21.694158   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:23.694489   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:24.598739   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:26.610829   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:23.295231   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:25.296151   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:27.794796   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:25.304009   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:27.304056   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:26.194236   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:28.692475   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.097722   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.099314   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.795050   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.795981   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.304915   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.306232   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:30.693731   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.193919   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.100924   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:35.597972   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:37.598135   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:34.295967   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:36.297180   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.809488   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:36.305924   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:35.696190   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.193380   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.098563   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:42.597443   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.794953   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.794982   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.806251   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:41.304826   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.694041   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.192299   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:44.598402   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.097519   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.294813   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.297991   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.794454   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.803978   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.804440   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.805016   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.192754   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.693494   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.098171   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:51.598327   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.795988   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:52.296853   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.806503   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:51.807986   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:50.193124   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:52.692831   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.097085   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.600496   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.795189   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.795825   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.304728   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.305314   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.696873   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:57.193194   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.098128   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.099894   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.295180   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.295325   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:58.804230   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:00.804430   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.303762   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.193752   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.194280   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.694730   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.597363   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.598434   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.599790   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.295998   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.298356   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.795402   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.305076   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.805412   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:04.884378   49198 pod_ready.go:81] duration metric: took 4m0.000380407s waiting for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	E1024 20:16:04.884408   49198 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:16:04.884437   49198 pod_ready.go:38] duration metric: took 4m3.201253081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:16:04.884459   49198 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:16:04.884488   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:04.884542   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:04.941853   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:04.941878   49198 cri.go:89] found id: ""
	I1024 20:16:04.941889   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:04.941963   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:04.947250   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:04.947317   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:04.990126   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:04.990151   49198 cri.go:89] found id: ""
	I1024 20:16:04.990163   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:04.990226   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:04.995026   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:04.995086   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:05.045422   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:05.045441   49198 cri.go:89] found id: ""
	I1024 20:16:05.045449   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:05.045505   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.049931   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:05.049997   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:05.115746   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:05.115767   49198 cri.go:89] found id: ""
	I1024 20:16:05.115775   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:05.115822   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.120476   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:05.120527   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:05.163487   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:05.163509   49198 cri.go:89] found id: ""
	I1024 20:16:05.163521   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:05.163580   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.167956   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:05.168027   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:05.209375   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:05.209403   49198 cri.go:89] found id: ""
	I1024 20:16:05.209412   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:05.209468   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.213932   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:05.213994   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:05.256033   49198 cri.go:89] found id: ""
	I1024 20:16:05.256055   49198 logs.go:284] 0 containers: []
	W1024 20:16:05.256070   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:05.256077   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:05.256130   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:05.313137   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:05.313163   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:05.313171   49198 cri.go:89] found id: ""
	I1024 20:16:05.313181   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:05.313236   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.319603   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.324116   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:05.324138   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:05.364879   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:05.364905   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:05.430314   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:05.430342   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:05.488524   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:05.488550   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:05.547000   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:05.547029   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:05.561360   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:05.561392   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:05.616215   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:05.616254   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:05.666923   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:05.666955   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:05.707305   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:05.707332   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:05.865943   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:05.865972   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:05.914044   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:05.914070   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:06.370658   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:06.370692   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:06.423891   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:06.423919   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:10.098187   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:12.597089   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:09.796035   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:11.796300   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:09.805755   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:11.806246   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:08.967015   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:16:08.982371   49198 api_server.go:72] duration metric: took 4m12.675281905s to wait for apiserver process to appear ...
	I1024 20:16:08.982397   49198 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:16:08.982431   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:08.982492   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:09.023557   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:09.023575   49198 cri.go:89] found id: ""
	I1024 20:16:09.023582   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:09.023626   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.029901   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:09.029954   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:09.066141   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:09.066169   49198 cri.go:89] found id: ""
	I1024 20:16:09.066181   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:09.066232   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.071099   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:09.071161   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:09.117898   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:09.117917   49198 cri.go:89] found id: ""
	I1024 20:16:09.117927   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:09.117979   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.122675   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:09.122729   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:09.162628   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:09.162647   49198 cri.go:89] found id: ""
	I1024 20:16:09.162656   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:09.162711   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.166799   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:09.166859   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:09.203866   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:09.203894   49198 cri.go:89] found id: ""
	I1024 20:16:09.203904   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:09.203968   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.208141   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:09.208201   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:09.252432   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:09.252449   49198 cri.go:89] found id: ""
	I1024 20:16:09.252457   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:09.252519   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.257709   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:09.257767   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:09.312883   49198 cri.go:89] found id: ""
	I1024 20:16:09.312908   49198 logs.go:284] 0 containers: []
	W1024 20:16:09.312919   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:09.312926   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:09.312984   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:09.365111   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:09.365138   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:09.365145   49198 cri.go:89] found id: ""
	I1024 20:16:09.365155   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:09.365215   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.370442   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.375055   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:09.375082   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:09.440328   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:09.440361   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:09.489007   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:09.489035   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:09.539429   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:09.539467   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:09.591012   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:09.591049   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:09.608336   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:09.608362   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:09.656190   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:09.656216   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:09.704915   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:09.704942   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:09.743847   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:09.743878   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:10.154301   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:10.154342   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:10.296525   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:10.296552   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:10.347731   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:10.347763   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:10.388130   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:10.388157   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:12.931381   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:16:12.938286   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 200:
	ok
	I1024 20:16:12.940208   49198 api_server.go:141] control plane version: v1.28.3
	I1024 20:16:12.940228   49198 api_server.go:131] duration metric: took 3.957823811s to wait for apiserver health ...
	I1024 20:16:12.940236   49198 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:16:12.940255   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:12.940311   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:12.985630   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:12.985654   49198 cri.go:89] found id: ""
	I1024 20:16:12.985664   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:12.985736   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:12.991021   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:12.991094   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:13.031617   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:13.031638   49198 cri.go:89] found id: ""
	I1024 20:16:13.031647   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:13.031690   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.036956   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:13.037010   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:13.074663   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:13.074683   49198 cri.go:89] found id: ""
	I1024 20:16:13.074692   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:13.074745   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.079061   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:13.079115   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:13.122923   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:13.122947   49198 cri.go:89] found id: ""
	I1024 20:16:13.122957   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:13.123010   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.126914   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:13.126987   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:13.174746   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:13.174781   49198 cri.go:89] found id: ""
	I1024 20:16:13.174791   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:13.174867   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.179817   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:13.179884   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:13.228560   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:13.228588   49198 cri.go:89] found id: ""
	I1024 20:16:13.228606   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:13.228661   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.233182   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:13.233247   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:13.272072   49198 cri.go:89] found id: ""
	I1024 20:16:13.272100   49198 logs.go:284] 0 containers: []
	W1024 20:16:13.272110   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:13.272117   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:13.272174   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:13.317104   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:13.317129   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:13.317137   49198 cri.go:89] found id: ""
	I1024 20:16:13.317148   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:13.317208   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.327265   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.331706   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:13.331730   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:13.378259   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:13.378299   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:13.402257   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:13.402289   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:13.465655   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:13.465685   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:13.521268   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:13.521312   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:13.923501   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:13.923550   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:13.976055   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:13.976082   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:14.028953   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:14.028985   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:14.069859   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:14.069887   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:14.196920   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:14.196959   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:14.257588   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:14.257617   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:14.302980   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:14.303019   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:14.344441   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:14.344469   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:16.893365   49198 system_pods.go:59] 8 kube-system pods found
	I1024 20:16:16.893395   49198 system_pods.go:61] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running
	I1024 20:16:16.893404   49198 system_pods.go:61] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running
	I1024 20:16:16.893412   49198 system_pods.go:61] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running
	I1024 20:16:16.893419   49198 system_pods.go:61] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running
	I1024 20:16:16.893426   49198 system_pods.go:61] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running
	I1024 20:16:16.893433   49198 system_pods.go:61] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running
	I1024 20:16:16.893444   49198 system_pods.go:61] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:16.893456   49198 system_pods.go:61] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running
	I1024 20:16:16.893469   49198 system_pods.go:74] duration metric: took 3.953227014s to wait for pod list to return data ...
	I1024 20:16:16.893483   49198 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:16:16.895879   49198 default_sa.go:45] found service account: "default"
	I1024 20:16:16.895896   49198 default_sa.go:55] duration metric: took 2.405313ms for default service account to be created ...
	I1024 20:16:16.895903   49198 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:16:16.902189   49198 system_pods.go:86] 8 kube-system pods found
	I1024 20:16:16.902217   49198 system_pods.go:89] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running
	I1024 20:16:16.902225   49198 system_pods.go:89] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running
	I1024 20:16:16.902232   49198 system_pods.go:89] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running
	I1024 20:16:16.902240   49198 system_pods.go:89] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running
	I1024 20:16:16.902246   49198 system_pods.go:89] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running
	I1024 20:16:16.902253   49198 system_pods.go:89] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running
	I1024 20:16:16.902269   49198 system_pods.go:89] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:16.902281   49198 system_pods.go:89] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running
	I1024 20:16:16.902292   49198 system_pods.go:126] duration metric: took 6.383517ms to wait for k8s-apps to be running ...
	I1024 20:16:16.902303   49198 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:16:16.902359   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:16:16.920015   49198 system_svc.go:56] duration metric: took 17.706073ms WaitForService to wait for kubelet.
	I1024 20:16:16.920039   49198 kubeadm.go:581] duration metric: took 4m20.612955305s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:16:16.920063   49198 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:16:16.924147   49198 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:16:16.924167   49198 node_conditions.go:123] node cpu capacity is 2
	I1024 20:16:16.924177   49198 node_conditions.go:105] duration metric: took 4.109839ms to run NodePressure ...
	I1024 20:16:16.924187   49198 start.go:228] waiting for startup goroutines ...
	I1024 20:16:16.924194   49198 start.go:233] waiting for cluster config update ...
	I1024 20:16:16.924206   49198 start.go:242] writing updated cluster config ...
	I1024 20:16:16.924490   49198 ssh_runner.go:195] Run: rm -f paused
	I1024 20:16:16.973588   49198 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:16:16.975639   49198 out.go:177] * Done! kubectl is now configured to use "embed-certs-867165" cluster and "default" namespace by default
	I1024 20:16:14.597646   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.598202   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:14.296652   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.795527   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:14.304610   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.305225   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.598694   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:21.099076   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.795830   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:21.295897   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.804148   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:20.805158   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.304826   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.598167   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.598533   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:27.598810   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.794690   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.796011   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:27.798006   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.803034   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:26.497612   49708 pod_ready.go:81] duration metric: took 4m0.000149915s waiting for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	E1024 20:16:26.497657   49708 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:16:26.497666   49708 pod_ready.go:38] duration metric: took 4m3.599625321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:16:26.497682   49708 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:16:26.497709   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:26.497757   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:26.569452   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:26.569479   49708 cri.go:89] found id: ""
	I1024 20:16:26.569489   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:26.569551   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.573824   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:26.573872   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:26.618910   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:26.618939   49708 cri.go:89] found id: ""
	I1024 20:16:26.618946   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:26.618998   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.623675   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:26.623723   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:26.671601   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:26.671621   49708 cri.go:89] found id: ""
	I1024 20:16:26.671628   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:26.671665   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.675997   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:26.676048   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:26.723100   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:26.723124   49708 cri.go:89] found id: ""
	I1024 20:16:26.723133   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:26.723187   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.727780   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:26.727837   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:26.765584   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:26.765608   49708 cri.go:89] found id: ""
	I1024 20:16:26.765618   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:26.765663   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.770062   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:26.770121   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:26.811710   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:26.811728   49708 cri.go:89] found id: ""
	I1024 20:16:26.811736   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:26.811786   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.816125   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:26.816187   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:26.860427   49708 cri.go:89] found id: ""
	I1024 20:16:26.860452   49708 logs.go:284] 0 containers: []
	W1024 20:16:26.860462   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:26.860469   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:26.860532   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:26.905052   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:26.905083   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:26.905091   49708 cri.go:89] found id: ""
	I1024 20:16:26.905100   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:26.905154   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.909590   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.913618   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:26.913636   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:26.958127   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:26.958157   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:27.012523   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:27.012555   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:27.059311   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:27.059345   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:27.102879   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:27.102905   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:27.154377   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:27.154409   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:27.197488   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:27.197516   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:27.210530   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:27.210559   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:27.379195   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:27.379225   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:27.826087   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:27.826119   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:27.880305   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:27.880348   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:27.932382   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:27.932417   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:27.979060   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:27.979088   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:29.598843   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:31.598885   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:30.295090   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:32.295447   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:30.532134   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:16:30.547497   49708 api_server.go:72] duration metric: took 4m14.551629626s to wait for apiserver process to appear ...
	I1024 20:16:30.547522   49708 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:16:30.547562   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:30.547627   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:30.588076   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:30.588097   49708 cri.go:89] found id: ""
	I1024 20:16:30.588104   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:30.588159   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.592397   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:30.592467   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:30.632362   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:30.632380   49708 cri.go:89] found id: ""
	I1024 20:16:30.632389   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:30.632446   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.636647   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:30.636695   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:30.676966   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:30.676997   49708 cri.go:89] found id: ""
	I1024 20:16:30.677005   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:30.677050   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.682153   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:30.682206   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:30.723427   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:30.723449   49708 cri.go:89] found id: ""
	I1024 20:16:30.723458   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:30.723516   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.727674   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:30.727740   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:30.774450   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:30.774473   49708 cri.go:89] found id: ""
	I1024 20:16:30.774482   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:30.774535   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.778753   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:30.778821   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:30.830068   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:30.830094   49708 cri.go:89] found id: ""
	I1024 20:16:30.830104   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:30.830169   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.835133   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:30.835201   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:30.885323   49708 cri.go:89] found id: ""
	I1024 20:16:30.885347   49708 logs.go:284] 0 containers: []
	W1024 20:16:30.885357   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:30.885363   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:30.885423   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:30.925415   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:30.925435   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:30.925440   49708 cri.go:89] found id: ""
	I1024 20:16:30.925447   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:30.925506   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.929723   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.933926   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:30.933965   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:30.999217   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:30.999250   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:31.051267   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:31.051300   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:31.107411   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:31.107444   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:31.233980   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:31.234009   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:31.275335   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:31.275362   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:31.329276   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:31.329316   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:31.380149   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:31.380184   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:31.393990   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:31.394016   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:31.440032   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:31.440065   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:31.478413   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:31.478445   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:31.529321   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:31.529349   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:31.578678   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:31.578708   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:33.603558   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:36.099473   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:34.295685   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:36.794759   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:34.514152   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:16:34.520578   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 200:
	ok
	I1024 20:16:34.522271   49708 api_server.go:141] control plane version: v1.28.3
	I1024 20:16:34.522289   49708 api_server.go:131] duration metric: took 3.974761353s to wait for apiserver health ...
	I1024 20:16:34.522297   49708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:16:34.522318   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:34.522363   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:34.568260   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:34.568280   49708 cri.go:89] found id: ""
	I1024 20:16:34.568287   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:34.568336   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.575356   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:34.575414   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:34.623358   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:34.623383   49708 cri.go:89] found id: ""
	I1024 20:16:34.623392   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:34.623449   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.628721   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:34.628777   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:34.675561   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:34.675583   49708 cri.go:89] found id: ""
	I1024 20:16:34.675592   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:34.675654   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.681613   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:34.681677   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:34.722858   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:34.722898   49708 cri.go:89] found id: ""
	I1024 20:16:34.722917   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:34.722974   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.727310   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:34.727376   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:34.768365   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:34.768383   49708 cri.go:89] found id: ""
	I1024 20:16:34.768390   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:34.768436   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.772776   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:34.772837   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:34.825992   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:34.826020   49708 cri.go:89] found id: ""
	I1024 20:16:34.826030   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:34.826083   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.830957   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:34.831011   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:34.878138   49708 cri.go:89] found id: ""
	I1024 20:16:34.878167   49708 logs.go:284] 0 containers: []
	W1024 20:16:34.878175   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:34.878180   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:34.878235   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:34.929288   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:34.929321   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:34.929328   49708 cri.go:89] found id: ""
	I1024 20:16:34.929338   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:34.929391   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.933731   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.938300   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:34.938326   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:34.980919   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:34.980944   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:35.021465   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:35.021495   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:35.165907   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:35.165935   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:35.212733   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:35.212759   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:35.620344   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:35.620395   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:35.669555   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:35.669588   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:35.720959   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:35.720987   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:35.762823   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:35.762852   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:35.805994   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:35.806021   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:35.879019   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:35.879046   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:35.941760   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:35.941796   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:35.995475   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:35.995515   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:38.526080   49708 system_pods.go:59] 8 kube-system pods found
	I1024 20:16:38.526106   49708 system_pods.go:61] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running
	I1024 20:16:38.526114   49708 system_pods.go:61] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running
	I1024 20:16:38.526122   49708 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running
	I1024 20:16:38.526128   49708 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running
	I1024 20:16:38.526133   49708 system_pods.go:61] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running
	I1024 20:16:38.526139   49708 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running
	I1024 20:16:38.526150   49708 system_pods.go:61] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:38.526159   49708 system_pods.go:61] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running
	I1024 20:16:38.526168   49708 system_pods.go:74] duration metric: took 4.003864797s to wait for pod list to return data ...
	I1024 20:16:38.526182   49708 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:16:38.528827   49708 default_sa.go:45] found service account: "default"
	I1024 20:16:38.528854   49708 default_sa.go:55] duration metric: took 2.662588ms for default service account to be created ...
	I1024 20:16:38.528863   49708 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:16:38.534560   49708 system_pods.go:86] 8 kube-system pods found
	I1024 20:16:38.534579   49708 system_pods.go:89] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running
	I1024 20:16:38.534585   49708 system_pods.go:89] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running
	I1024 20:16:38.534589   49708 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running
	I1024 20:16:38.534594   49708 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running
	I1024 20:16:38.534598   49708 system_pods.go:89] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running
	I1024 20:16:38.534602   49708 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running
	I1024 20:16:38.534610   49708 system_pods.go:89] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:38.534615   49708 system_pods.go:89] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running
	I1024 20:16:38.534622   49708 system_pods.go:126] duration metric: took 5.753846ms to wait for k8s-apps to be running ...
	I1024 20:16:38.534630   49708 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:16:38.534668   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:16:38.549835   49708 system_svc.go:56] duration metric: took 15.197069ms WaitForService to wait for kubelet.
	I1024 20:16:38.549856   49708 kubeadm.go:581] duration metric: took 4m22.553994431s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:16:38.549878   49708 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:16:38.553043   49708 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:16:38.553065   49708 node_conditions.go:123] node cpu capacity is 2
	I1024 20:16:38.553076   49708 node_conditions.go:105] duration metric: took 3.193057ms to run NodePressure ...
	I1024 20:16:38.553086   49708 start.go:228] waiting for startup goroutines ...
	I1024 20:16:38.553091   49708 start.go:233] waiting for cluster config update ...
	I1024 20:16:38.553100   49708 start.go:242] writing updated cluster config ...
	I1024 20:16:38.553348   49708 ssh_runner.go:195] Run: rm -f paused
	I1024 20:16:38.601183   49708 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:16:38.603463   49708 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-643126" cluster and "default" namespace by default
	I1024 20:16:38.597848   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:40.599437   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:38.795772   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:41.293845   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:43.096749   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:45.097165   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:47.097443   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:43.298644   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:45.797003   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:49.097716   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:51.597754   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:48.295110   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:50.796361   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:53.600174   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:56.097860   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:53.295856   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:55.295890   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:57.795597   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:58.097890   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:00.598554   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:59.795830   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:02.295268   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:03.098362   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:05.596632   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:04.296575   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:06.296820   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:08.098450   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:10.597828   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:12.599199   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:08.795717   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:11.296662   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:15.097014   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:17.097844   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:13.794373   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:15.795134   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:17.795531   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:19.098039   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:21.098582   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:19.796588   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:22.296536   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:23.597792   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:26.098066   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:24.795501   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:26.796240   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:27.488206   49071 pod_ready.go:81] duration metric: took 4m0.000518995s waiting for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	E1024 20:17:27.488255   49071 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:17:27.488267   49071 pod_ready.go:38] duration metric: took 4m4.400905907s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:17:27.488288   49071 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:17:27.488320   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:27.488379   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:27.544995   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:27.545022   49071 cri.go:89] found id: ""
	I1024 20:17:27.545033   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:27.545116   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.550068   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:27.550127   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:27.595184   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:27.595207   49071 cri.go:89] found id: ""
	I1024 20:17:27.595215   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:27.595265   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.600016   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:27.600075   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:27.644222   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:27.644254   49071 cri.go:89] found id: ""
	I1024 20:17:27.644265   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:27.644321   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.654982   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:27.655048   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:27.697751   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:27.697773   49071 cri.go:89] found id: ""
	I1024 20:17:27.697783   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:27.697838   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.701909   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:27.701969   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:27.746060   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:27.746085   49071 cri.go:89] found id: ""
	I1024 20:17:27.746094   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:27.746147   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.750335   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:27.750392   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:27.791948   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:27.791973   49071 cri.go:89] found id: ""
	I1024 20:17:27.791981   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:27.792045   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.796535   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:27.796616   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:27.839648   49071 cri.go:89] found id: ""
	I1024 20:17:27.839675   49071 logs.go:284] 0 containers: []
	W1024 20:17:27.839683   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:27.839689   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:27.839750   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:27.889284   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:27.889327   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:27.889334   49071 cri.go:89] found id: ""
	I1024 20:17:27.889343   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:27.889404   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.893661   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.897791   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:27.897819   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:27.941335   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:27.941369   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:27.954378   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:27.954409   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:28.115760   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:28.115792   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:28.171378   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:28.171409   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:28.211591   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:28.211620   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:28.247491   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247676   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247811   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247961   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:28.268681   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:28.268717   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:28.099979   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:28.791972   50077 pod_ready.go:81] duration metric: took 4m0.000695315s waiting for pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace to be "Ready" ...
	E1024 20:17:28.792005   50077 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:17:28.792032   50077 pod_ready.go:38] duration metric: took 4m1.199949971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:17:28.792069   50077 kubeadm.go:640] restartCluster took 5m7.653001653s
	W1024 20:17:28.792133   50077 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1024 20:17:28.792173   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1024 20:17:28.321382   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:28.321413   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:28.364236   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:28.364260   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:28.840985   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:28.841028   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:28.896806   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:28.896846   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:28.948487   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:28.948520   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:28.993469   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:28.993500   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:29.052064   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:29.052102   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:29.052154   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:29.052165   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052174   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052180   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052186   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:29.052191   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:29.052196   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:33.598790   50077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.806587354s)
	I1024 20:17:33.598873   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:17:33.614594   50077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:17:33.625146   50077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:17:33.635420   50077 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:17:33.635486   50077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1024 20:17:33.858680   50077 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 20:17:39.053169   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:17:39.069883   49071 api_server.go:72] duration metric: took 4m23.373979574s to wait for apiserver process to appear ...
	I1024 20:17:39.069910   49071 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:17:39.069953   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:39.070015   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:39.116676   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:39.116696   49071 cri.go:89] found id: ""
	I1024 20:17:39.116703   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:39.116752   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.121745   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:39.121814   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:39.174897   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:39.174932   49071 cri.go:89] found id: ""
	I1024 20:17:39.174943   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:39.175002   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.180933   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:39.181003   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:39.239666   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:39.239691   49071 cri.go:89] found id: ""
	I1024 20:17:39.239701   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:39.239754   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.244270   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:39.244328   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:39.285405   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:39.285432   49071 cri.go:89] found id: ""
	I1024 20:17:39.285443   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:39.285503   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.290326   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:39.290393   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:39.330723   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:39.330751   49071 cri.go:89] found id: ""
	I1024 20:17:39.330761   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:39.330816   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.335850   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:39.335917   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:39.375354   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:39.375377   49071 cri.go:89] found id: ""
	I1024 20:17:39.375387   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:39.375449   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.380243   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:39.380313   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:39.424841   49071 cri.go:89] found id: ""
	I1024 20:17:39.424875   49071 logs.go:284] 0 containers: []
	W1024 20:17:39.424885   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:39.424892   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:39.424950   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:39.464134   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:39.464153   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:39.464160   49071 cri.go:89] found id: ""
	I1024 20:17:39.464168   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:39.464224   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.468810   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.473093   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:39.473128   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:39.507113   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507292   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507432   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507588   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:39.530433   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:39.530479   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:39.666739   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:39.666765   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:39.710505   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:39.710538   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:39.749917   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:39.749946   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:39.799168   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:39.799196   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:39.846346   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:39.846377   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:40.273032   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:40.273065   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:40.320491   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:40.320521   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:40.378356   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:40.378395   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:40.421618   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:40.421647   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:40.466303   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:40.466334   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:40.478941   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:40.478966   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:40.544618   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:40.544642   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:40.544694   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:40.544706   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544718   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544725   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544733   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:40.544739   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:40.544747   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:46.481686   50077 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1024 20:17:46.481762   50077 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 20:17:46.481861   50077 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 20:17:46.482000   50077 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 20:17:46.482104   50077 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 20:17:46.482236   50077 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 20:17:46.482362   50077 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 20:17:46.482486   50077 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1024 20:17:46.482538   50077 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 20:17:46.484150   50077 out.go:204]   - Generating certificates and keys ...
	I1024 20:17:46.484246   50077 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 20:17:46.484315   50077 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 20:17:46.484402   50077 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1024 20:17:46.484509   50077 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1024 20:17:46.484603   50077 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1024 20:17:46.484689   50077 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1024 20:17:46.484778   50077 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1024 20:17:46.484870   50077 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1024 20:17:46.484972   50077 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1024 20:17:46.485069   50077 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1024 20:17:46.485123   50077 kubeadm.go:322] [certs] Using the existing "sa" key
	I1024 20:17:46.485200   50077 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 20:17:46.485263   50077 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 20:17:46.485343   50077 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 20:17:46.485430   50077 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 20:17:46.485503   50077 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 20:17:46.485590   50077 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 20:17:46.487065   50077 out.go:204]   - Booting up control plane ...
	I1024 20:17:46.487158   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 20:17:46.487219   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 20:17:46.487291   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 20:17:46.487401   50077 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 20:17:46.487551   50077 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 20:17:46.487623   50077 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.003664 seconds
	I1024 20:17:46.487756   50077 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 20:17:46.487882   50077 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 20:17:46.487940   50077 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 20:17:46.488123   50077 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-467375 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1024 20:17:46.488199   50077 kubeadm.go:322] [bootstrap-token] Using token: axp9sy.xsem3c8nzt72b18p
	I1024 20:17:46.490507   50077 out.go:204]   - Configuring RBAC rules ...
	I1024 20:17:46.490603   50077 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 20:17:46.490719   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 20:17:46.490832   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 20:17:46.490938   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 20:17:46.491009   50077 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 20:17:46.491044   50077 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 20:17:46.491083   50077 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 20:17:46.491091   50077 kubeadm.go:322] 
	I1024 20:17:46.491151   50077 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 20:17:46.491163   50077 kubeadm.go:322] 
	I1024 20:17:46.491224   50077 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 20:17:46.491231   50077 kubeadm.go:322] 
	I1024 20:17:46.491260   50077 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 20:17:46.491346   50077 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 20:17:46.491409   50077 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 20:17:46.491419   50077 kubeadm.go:322] 
	I1024 20:17:46.491511   50077 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 20:17:46.491621   50077 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 20:17:46.491715   50077 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 20:17:46.491725   50077 kubeadm.go:322] 
	I1024 20:17:46.491829   50077 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1024 20:17:46.491929   50077 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 20:17:46.491937   50077 kubeadm.go:322] 
	I1024 20:17:46.492064   50077 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token axp9sy.xsem3c8nzt72b18p \
	I1024 20:17:46.492249   50077 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f \
	I1024 20:17:46.492292   50077 kubeadm.go:322]     --control-plane 	  
	I1024 20:17:46.492302   50077 kubeadm.go:322] 
	I1024 20:17:46.492423   50077 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 20:17:46.492435   50077 kubeadm.go:322] 
	I1024 20:17:46.492532   50077 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token axp9sy.xsem3c8nzt72b18p \
	I1024 20:17:46.492675   50077 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
	I1024 20:17:46.492686   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:17:46.492694   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:17:46.494152   50077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:17:46.495677   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:17:46.510639   50077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:17:46.539872   50077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:17:46.539933   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:46.539945   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=old-k8s-version-467375 minikube.k8s.io/updated_at=2023_10_24T20_17_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:46.984338   50077 ops.go:34] apiserver oom_adj: -16
	I1024 20:17:46.984391   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:47.163022   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:47.798557   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:48.298499   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:48.798506   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:49.298076   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:49.798120   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.298504   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.798493   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:51.298777   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:51.798477   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:52.298309   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:52.798243   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.546645   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:17:50.552245   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 200:
	ok
	I1024 20:17:50.553721   49071 api_server.go:141] control plane version: v1.28.3
	I1024 20:17:50.553747   49071 api_server.go:131] duration metric: took 11.483829454s to wait for apiserver health ...
	I1024 20:17:50.553757   49071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:17:50.553784   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:50.553844   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:50.594504   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:50.594528   49071 cri.go:89] found id: ""
	I1024 20:17:50.594536   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:50.594586   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.598912   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:50.598963   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:50.644339   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:50.644355   49071 cri.go:89] found id: ""
	I1024 20:17:50.644362   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:50.644406   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.649046   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:50.649099   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:50.688245   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:50.688268   49071 cri.go:89] found id: ""
	I1024 20:17:50.688278   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:50.688330   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.692382   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:50.692429   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:50.736359   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:50.736384   49071 cri.go:89] found id: ""
	I1024 20:17:50.736393   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:50.736451   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.741226   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:50.741287   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:50.797894   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:50.797920   49071 cri.go:89] found id: ""
	I1024 20:17:50.797930   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:50.797997   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.802725   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:50.802781   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:50.851081   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:50.851106   49071 cri.go:89] found id: ""
	I1024 20:17:50.851115   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:50.851166   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.855549   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:50.855600   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:50.909237   49071 cri.go:89] found id: ""
	I1024 20:17:50.909265   49071 logs.go:284] 0 containers: []
	W1024 20:17:50.909276   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:50.909283   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:50.909355   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:50.958541   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:50.958567   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:50.958574   49071 cri.go:89] found id: ""
	I1024 20:17:50.958583   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:50.958638   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.962947   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.967261   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:50.967283   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:51.087158   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:51.087190   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:51.144421   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:51.144458   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:51.200040   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:51.200072   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:51.255703   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:51.255740   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:51.683831   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:51.683869   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:51.726821   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:51.726856   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:51.776977   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:51.777006   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:51.822826   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:51.822861   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:51.873557   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.873838   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.874063   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.874313   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:51.900648   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:51.900690   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:51.916123   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:51.916161   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:51.960440   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:51.960470   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:52.010020   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:52.010051   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:52.051039   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:52.051063   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:52.051113   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:52.051127   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051142   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051162   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051173   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:52.051183   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:52.051190   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:53.298168   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:53.798546   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:54.298175   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:54.798534   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:55.298510   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:55.798562   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:56.297914   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:56.797930   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:57.298527   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:57.798493   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:58.298630   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:58.798550   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:59.298526   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:59.798537   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:00.298538   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:00.798072   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:01.014502   50077 kubeadm.go:1081] duration metric: took 14.474620601s to wait for elevateKubeSystemPrivileges.
	I1024 20:18:01.014547   50077 kubeadm.go:406] StartCluster complete in 5m39.9402605s
	I1024 20:18:01.014569   50077 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:18:01.014667   50077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:18:01.017210   50077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:18:01.017539   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:18:01.017574   50077 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:18:01.017659   50077 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017666   50077 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017677   50077 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-467375"
	W1024 20:18:01.017690   50077 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:18:01.017695   50077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-467375"
	I1024 20:18:01.017699   50077 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017718   50077 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-467375"
	W1024 20:18:01.017727   50077 addons.go:240] addon metrics-server should already be in state true
	I1024 20:18:01.017731   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.017777   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.017816   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:18:01.018053   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018088   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.018111   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018122   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018149   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.018257   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.036179   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37631
	I1024 20:18:01.036834   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.037477   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.037504   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.037665   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43905
	I1024 20:18:01.037824   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34475
	I1024 20:18:01.037912   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.038074   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.038220   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.038306   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.038850   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.038867   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.039010   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.039021   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.039391   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.039410   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.039925   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.039949   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.039974   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.040014   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.041243   50077 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-467375"
	W1024 20:18:01.041258   50077 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:18:01.041277   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.041611   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.041645   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.056254   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
	I1024 20:18:01.056888   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.057215   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I1024 20:18:01.057487   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.057502   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.057895   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.057956   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.058536   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.058574   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.058844   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.058857   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.058929   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I1024 20:18:01.059172   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.059288   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.059451   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.059964   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.059975   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.060353   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.060565   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.061107   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.062802   50077 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:18:01.064189   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:18:01.064209   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:18:01.064230   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.062154   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.066082   50077 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:18:01.067046   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.067880   50077 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:18:01.067901   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:18:01.067921   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.068400   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.068432   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.069073   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.069343   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.069484   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.069587   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.071678   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.072196   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.072220   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.072596   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.072776   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.072905   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.073043   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.079576   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I1024 20:18:01.080025   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.080592   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.080613   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.081035   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.081240   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.083090   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.083404   50077 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:18:01.083425   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:18:01.083443   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.086433   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.086802   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.086824   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.087003   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.087198   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.087348   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.087506   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.197205   50077 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-467375" context rescaled to 1 replicas
	I1024 20:18:01.197249   50077 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:18:01.200328   50077 out.go:177] * Verifying Kubernetes components...
	I1024 20:18:02.061971   49071 system_pods.go:59] 8 kube-system pods found
	I1024 20:18:02.062015   49071 system_pods.go:61] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running
	I1024 20:18:02.062024   49071 system_pods.go:61] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running
	I1024 20:18:02.062031   49071 system_pods.go:61] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running
	I1024 20:18:02.062040   49071 system_pods.go:61] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running
	I1024 20:18:02.062047   49071 system_pods.go:61] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running
	I1024 20:18:02.062054   49071 system_pods.go:61] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running
	I1024 20:18:02.062066   49071 system_pods.go:61] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:02.062078   49071 system_pods.go:61] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running
	I1024 20:18:02.062086   49071 system_pods.go:74] duration metric: took 11.508322005s to wait for pod list to return data ...
	I1024 20:18:02.062098   49071 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:18:02.065560   49071 default_sa.go:45] found service account: "default"
	I1024 20:18:02.065585   49071 default_sa.go:55] duration metric: took 3.476366ms for default service account to be created ...
	I1024 20:18:02.065595   49071 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:18:02.073224   49071 system_pods.go:86] 8 kube-system pods found
	I1024 20:18:02.073253   49071 system_pods.go:89] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running
	I1024 20:18:02.073262   49071 system_pods.go:89] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running
	I1024 20:18:02.073269   49071 system_pods.go:89] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running
	I1024 20:18:02.073277   49071 system_pods.go:89] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running
	I1024 20:18:02.073284   49071 system_pods.go:89] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running
	I1024 20:18:02.073290   49071 system_pods.go:89] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running
	I1024 20:18:02.073313   49071 system_pods.go:89] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:02.073326   49071 system_pods.go:89] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running
	I1024 20:18:02.073335   49071 system_pods.go:126] duration metric: took 7.733883ms to wait for k8s-apps to be running ...
	I1024 20:18:02.073346   49071 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:18:02.073405   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:18:02.093085   49071 system_svc.go:56] duration metric: took 19.727658ms WaitForService to wait for kubelet.
	I1024 20:18:02.093113   49071 kubeadm.go:581] duration metric: took 4m46.397215509s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:18:02.093135   49071 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:18:02.101982   49071 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:18:02.102007   49071 node_conditions.go:123] node cpu capacity is 2
	I1024 20:18:02.102018   49071 node_conditions.go:105] duration metric: took 8.878046ms to run NodePressure ...
	I1024 20:18:02.102035   49071 start.go:228] waiting for startup goroutines ...
	I1024 20:18:02.102041   49071 start.go:233] waiting for cluster config update ...
	I1024 20:18:02.102054   49071 start.go:242] writing updated cluster config ...
	I1024 20:18:02.102767   49071 ssh_runner.go:195] Run: rm -f paused
	I1024 20:18:02.159693   49071 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:18:02.161831   49071 out.go:177] * Done! kubectl is now configured to use "no-preload-014826" cluster and "default" namespace by default
	I1024 20:18:01.201778   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:18:01.315241   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:18:01.335753   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:18:01.339160   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:18:01.339182   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:18:01.376704   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:18:01.376726   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:18:01.385150   50077 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-467375" to be "Ready" ...
	I1024 20:18:01.385223   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 20:18:01.443957   50077 node_ready.go:49] node "old-k8s-version-467375" has status "Ready":"True"
	I1024 20:18:01.443978   50077 node_ready.go:38] duration metric: took 58.799937ms waiting for node "old-k8s-version-467375" to be "Ready" ...
	I1024 20:18:01.443987   50077 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:18:01.453968   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:18:01.453998   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:18:01.481599   50077 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:01.543065   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:18:02.715998   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.400725332s)
	I1024 20:18:02.716049   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716062   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716066   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.38027937s)
	I1024 20:18:02.716103   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716120   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716152   50077 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.330913087s)
	I1024 20:18:02.716170   50077 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1024 20:18:02.716377   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716392   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.716402   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716410   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716512   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.716522   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716536   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.716547   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716557   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716623   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716637   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.717532   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.717534   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.717554   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.790444   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.790480   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.790901   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.790925   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895176   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.352065096s)
	I1024 20:18:02.895243   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.895268   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.895611   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.895630   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895634   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.895639   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.895654   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.895875   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.895888   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895905   50077 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-467375"
	I1024 20:18:02.897664   50077 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1024 20:18:02.899508   50077 addons.go:502] enable addons completed in 1.881940564s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1024 20:18:03.719917   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:06.207388   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:08.207967   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:10.708258   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:12.208133   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"True"
	I1024 20:18:12.208155   50077 pod_ready.go:81] duration metric: took 10.726531733s waiting for pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.208166   50077 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9bpht" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.213213   50077 pod_ready.go:92] pod "kube-proxy-9bpht" in "kube-system" namespace has status "Ready":"True"
	I1024 20:18:12.213237   50077 pod_ready.go:81] duration metric: took 5.063943ms waiting for pod "kube-proxy-9bpht" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.213247   50077 pod_ready.go:38] duration metric: took 10.769249135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:18:12.213267   50077 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:18:12.213344   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:18:12.228272   50077 api_server.go:72] duration metric: took 11.030986098s to wait for apiserver process to appear ...
	I1024 20:18:12.228295   50077 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:18:12.228313   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:18:12.234663   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1024 20:18:12.235584   50077 api_server.go:141] control plane version: v1.16.0
	I1024 20:18:12.235599   50077 api_server.go:131] duration metric: took 7.297294ms to wait for apiserver health ...
	I1024 20:18:12.235605   50077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:18:12.239203   50077 system_pods.go:59] 4 kube-system pods found
	I1024 20:18:12.239228   50077 system_pods.go:61] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.239235   50077 system_pods.go:61] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.239246   50077 system_pods.go:61] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.239292   50077 system_pods.go:61] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.239307   50077 system_pods.go:74] duration metric: took 3.696523ms to wait for pod list to return data ...
	I1024 20:18:12.239315   50077 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:18:12.242065   50077 default_sa.go:45] found service account: "default"
	I1024 20:18:12.242080   50077 default_sa.go:55] duration metric: took 2.760528ms for default service account to be created ...
	I1024 20:18:12.242086   50077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:18:12.245602   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.245624   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.245631   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.245640   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.245648   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.245664   50077 retry.go:31] will retry after 287.935783ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:12.538837   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.538900   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.538924   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.538942   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.538955   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.538979   50077 retry.go:31] will retry after 320.680304ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:12.864800   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.864826   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.864832   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.864838   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.864844   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.864858   50077 retry.go:31] will retry after 364.04425ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:13.233903   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:13.233927   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:13.233934   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:13.233941   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:13.233946   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:13.233974   50077 retry.go:31] will retry after 559.821457ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:13.799208   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:13.799234   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:13.799240   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:13.799246   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:13.799252   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:13.799266   50077 retry.go:31] will retry after 522.263157ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:14.325735   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:14.325767   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:14.325776   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:14.325789   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:14.325799   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:14.325817   50077 retry.go:31] will retry after 668.137602ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:14.999589   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:14.999614   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:14.999620   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:14.999626   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:14.999632   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:14.999646   50077 retry.go:31] will retry after 859.983274ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:15.865531   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:15.865556   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:15.865561   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:15.865568   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:15.865573   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:15.865589   50077 retry.go:31] will retry after 1.238765858s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:17.109999   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:17.110023   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:17.110028   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:17.110035   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:17.110041   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:17.110054   50077 retry.go:31] will retry after 1.485428629s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:18.600612   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:18.600635   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:18.600641   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:18.600647   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:18.600652   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:18.600665   50077 retry.go:31] will retry after 2.290652681s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:20.897529   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:20.897556   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:20.897562   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:20.897571   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:20.897577   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:20.897593   50077 retry.go:31] will retry after 2.367552906s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:23.270766   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:23.270792   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:23.270800   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:23.270810   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:23.270817   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:23.270834   50077 retry.go:31] will retry after 2.861357376s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:26.136663   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:26.136696   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:26.136704   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:26.136715   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:26.136725   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:26.136743   50077 retry.go:31] will retry after 3.526737387s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:29.670148   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:29.670175   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:29.670181   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:29.670188   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:29.670195   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:29.670215   50077 retry.go:31] will retry after 5.450931485s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:35.125964   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:35.125989   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:35.125994   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:35.126001   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:35.126007   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:35.126022   50077 retry.go:31] will retry after 5.914408322s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:41.046649   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:41.046670   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:41.046677   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:41.046684   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:41.046690   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:41.046704   50077 retry.go:31] will retry after 6.748980526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:47.802189   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:47.802212   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:47.802217   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:47.802225   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:47.802230   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:47.802244   50077 retry.go:31] will retry after 8.662562452s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:56.471025   50077 system_pods.go:86] 7 kube-system pods found
	I1024 20:18:56.471062   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:56.471071   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:18:56.471079   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:18:56.471086   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:56.471094   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Pending
	I1024 20:18:56.471108   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:56.471121   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:56.471142   50077 retry.go:31] will retry after 10.356793998s: missing components: etcd, kube-scheduler
	I1024 20:19:06.834711   50077 system_pods.go:86] 8 kube-system pods found
	I1024 20:19:06.834741   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:19:06.834749   50077 system_pods.go:89] "etcd-old-k8s-version-467375" [8e194c9a-b258-4488-9fda-24b681d09d8d] Pending
	I1024 20:19:06.834755   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:19:06.834762   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:19:06.834767   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:19:06.834772   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Running
	I1024 20:19:06.834782   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:19:06.834792   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:19:06.834809   50077 retry.go:31] will retry after 14.609583217s: missing components: etcd
	I1024 20:19:21.450651   50077 system_pods.go:86] 8 kube-system pods found
	I1024 20:19:21.450678   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:19:21.450685   50077 system_pods.go:89] "etcd-old-k8s-version-467375" [8e194c9a-b258-4488-9fda-24b681d09d8d] Running
	I1024 20:19:21.450689   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:19:21.450693   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:19:21.450699   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:19:21.450709   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Running
	I1024 20:19:21.450719   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:19:21.450732   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:19:21.450745   50077 system_pods.go:126] duration metric: took 1m9.20865321s to wait for k8s-apps to be running ...
	I1024 20:19:21.450757   50077 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:19:21.450800   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:19:21.468030   50077 system_svc.go:56] duration metric: took 17.254248ms WaitForService to wait for kubelet.
	I1024 20:19:21.468061   50077 kubeadm.go:581] duration metric: took 1m20.270780436s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:19:21.468089   50077 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:19:21.471958   50077 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:19:21.471982   50077 node_conditions.go:123] node cpu capacity is 2
	I1024 20:19:21.471993   50077 node_conditions.go:105] duration metric: took 3.898893ms to run NodePressure ...
	I1024 20:19:21.472003   50077 start.go:228] waiting for startup goroutines ...
	I1024 20:19:21.472008   50077 start.go:233] waiting for cluster config update ...
	I1024 20:19:21.472018   50077 start.go:242] writing updated cluster config ...
	I1024 20:19:21.472257   50077 ssh_runner.go:195] Run: rm -f paused
	I1024 20:19:21.520082   50077 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1024 20:19:21.522545   50077 out.go:177] 
	W1024 20:19:21.524125   50077 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1024 20:19:21.525515   50077 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1024 20:19:21.527113   50077 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-467375" cluster and "default" namespace by default
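The start log above ends with a client/server version-skew warning: the host kubectl is v1.28.3 while the cluster runs Kubernetes v1.16.0 (minor skew 12). As the output itself suggests, the skew can be sidestepped by invoking the kubectl bundled with minikube; a minimal sketch, using the profile name shown in the log:

	# Run a client matched to the v1.16.0 control plane instead of the host kubectl
	minikube -p old-k8s-version-467375 kubectl -- get pods -A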
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 20:11:37 UTC, ends at Tue 2023-10-24 20:25:40 UTC. --
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.311440460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179140311423599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=eab0349a-84e5-4cd0-878b-aeb1509ff03d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.312326191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=56eebbef-641b-43f5-b484-b95060cd1d6e name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.312374573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=56eebbef-641b-43f5-b484-b95060cd1d6e name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.312559567Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178365177645624,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0fca5db1c6e6cd414f6e628eb656f54fea10276fdbed1480e151c2b78ccaa2,PodSandboxId:2068401dd05a9d5f7d28baf7bce29314378d331360632dd9ac6d5c7d9fa16f0c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178342875716149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65a34d3b-218a-456c-8c23-ec8d153cbbc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4968ce,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc,PodSandboxId:3e8a8afb8a5e56348c944e709ad020f062e60c4c354826b59b020a9bb30b4ab6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178341324178540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mklhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53629562-a50d-4ca5-80ab-baed4852b4d7,},Annotations:map[string]string{io.kubernetes.container.hash: 47d386ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178334232828396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139,PodSandboxId:a44b1838edc1b67b0c2a39fc2c9ffc3d0030a856acdcf935918e0b11d16572dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178333997550171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x4zbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
47f6c48-c4de-4feb-a3ea-8874c980d263,},Annotations:map[string]string{io.kubernetes.container.hash: 33bcdd1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591,PodSandboxId:38c35866ffc89e09cf124615c84b76d9bd1995a227016a5a3e9b7ec3a5e6f28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178327498291438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e419dd8a9426a70be6e020ac0e950e19,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf,PodSandboxId:9333f19493abfe672b5f468de087fc27e69c4dd7b3bd12390d48b7978d48d5b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178327031893161,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e04e11a7b4eef363358253e1bcb9bbb,},An
notations:map[string]string{io.kubernetes.container.hash: 3f303518,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928,PodSandboxId:de6c3901c21d62e93a43ad72a4e058f4436cc931f8fccc032f22f277e21b961b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178327084269912,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c0a5a5ea38cfcbc96a50f8fa8b28db,},An
notations:map[string]string{io.kubernetes.container.hash: b0b33473,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687,PodSandboxId:d22c3191e55246293aac485dd5eed29f79c4d428394f317268e523b152ee38f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178327129366826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
6a4b6de4f1fe8085ff32bfcacd2354a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=56eebbef-641b-43f5-b484-b95060cd1d6e name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.353071979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7dcb056b-d7f0-434f-9314-75a2a88433a2 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.353152041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7dcb056b-d7f0-434f-9314-75a2a88433a2 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.355572021Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=16421b5a-a23e-4337-bb35-5886c8a0a81b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.355952961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179140355939787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=16421b5a-a23e-4337-bb35-5886c8a0a81b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.356854598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=354acd92-5b94-4e7e-b5b6-3cd73b2843dc name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.356901322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=354acd92-5b94-4e7e-b5b6-3cd73b2843dc name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.357175629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178365177645624,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0fca5db1c6e6cd414f6e628eb656f54fea10276fdbed1480e151c2b78ccaa2,PodSandboxId:2068401dd05a9d5f7d28baf7bce29314378d331360632dd9ac6d5c7d9fa16f0c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178342875716149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65a34d3b-218a-456c-8c23-ec8d153cbbc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4968ce,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc,PodSandboxId:3e8a8afb8a5e56348c944e709ad020f062e60c4c354826b59b020a9bb30b4ab6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178341324178540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mklhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53629562-a50d-4ca5-80ab-baed4852b4d7,},Annotations:map[string]string{io.kubernetes.container.hash: 47d386ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178334232828396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139,PodSandboxId:a44b1838edc1b67b0c2a39fc2c9ffc3d0030a856acdcf935918e0b11d16572dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178333997550171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x4zbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
47f6c48-c4de-4feb-a3ea-8874c980d263,},Annotations:map[string]string{io.kubernetes.container.hash: 33bcdd1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591,PodSandboxId:38c35866ffc89e09cf124615c84b76d9bd1995a227016a5a3e9b7ec3a5e6f28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178327498291438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e419dd8a9426a70be6e020ac0e950e19,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf,PodSandboxId:9333f19493abfe672b5f468de087fc27e69c4dd7b3bd12390d48b7978d48d5b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178327031893161,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e04e11a7b4eef363358253e1bcb9bbb,},An
notations:map[string]string{io.kubernetes.container.hash: 3f303518,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928,PodSandboxId:de6c3901c21d62e93a43ad72a4e058f4436cc931f8fccc032f22f277e21b961b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178327084269912,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c0a5a5ea38cfcbc96a50f8fa8b28db,},An
notations:map[string]string{io.kubernetes.container.hash: b0b33473,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687,PodSandboxId:d22c3191e55246293aac485dd5eed29f79c4d428394f317268e523b152ee38f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178327129366826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
6a4b6de4f1fe8085ff32bfcacd2354a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=354acd92-5b94-4e7e-b5b6-3cd73b2843dc name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.393555999Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=10dfff46-21ae-4b98-9391-b32a5acfbb86 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.393613584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=10dfff46-21ae-4b98-9391-b32a5acfbb86 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.394958820Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4d3e6ade-774d-476e-a14a-d29ca5aa39d1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.395487186Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179140395472416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4d3e6ade-774d-476e-a14a-d29ca5aa39d1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.396269130Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=862f9923-7226-4e06-9136-be18f164b103 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.396311799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=862f9923-7226-4e06-9136-be18f164b103 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.396529336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178365177645624,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0fca5db1c6e6cd414f6e628eb656f54fea10276fdbed1480e151c2b78ccaa2,PodSandboxId:2068401dd05a9d5f7d28baf7bce29314378d331360632dd9ac6d5c7d9fa16f0c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178342875716149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65a34d3b-218a-456c-8c23-ec8d153cbbc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4968ce,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc,PodSandboxId:3e8a8afb8a5e56348c944e709ad020f062e60c4c354826b59b020a9bb30b4ab6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178341324178540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mklhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53629562-a50d-4ca5-80ab-baed4852b4d7,},Annotations:map[string]string{io.kubernetes.container.hash: 47d386ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178334232828396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139,PodSandboxId:a44b1838edc1b67b0c2a39fc2c9ffc3d0030a856acdcf935918e0b11d16572dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178333997550171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x4zbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
47f6c48-c4de-4feb-a3ea-8874c980d263,},Annotations:map[string]string{io.kubernetes.container.hash: 33bcdd1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591,PodSandboxId:38c35866ffc89e09cf124615c84b76d9bd1995a227016a5a3e9b7ec3a5e6f28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178327498291438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e419dd8a9426a70be6e020ac0e950e19,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf,PodSandboxId:9333f19493abfe672b5f468de087fc27e69c4dd7b3bd12390d48b7978d48d5b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178327031893161,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e04e11a7b4eef363358253e1bcb9bbb,},An
notations:map[string]string{io.kubernetes.container.hash: 3f303518,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928,PodSandboxId:de6c3901c21d62e93a43ad72a4e058f4436cc931f8fccc032f22f277e21b961b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178327084269912,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c0a5a5ea38cfcbc96a50f8fa8b28db,},An
notations:map[string]string{io.kubernetes.container.hash: b0b33473,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687,PodSandboxId:d22c3191e55246293aac485dd5eed29f79c4d428394f317268e523b152ee38f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178327129366826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
6a4b6de4f1fe8085ff32bfcacd2354a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=862f9923-7226-4e06-9136-be18f164b103 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.430603317Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c97fbbb2-145b-44d5-ac6c-899e51328860 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.430661912Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c97fbbb2-145b-44d5-ac6c-899e51328860 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.432317273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4fea0152-a148-4b3a-858e-cadd86160ae9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.432864197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179140432845382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4fea0152-a148-4b3a-858e-cadd86160ae9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.435487520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a7c0e821-9a3c-4a26-9d39-5ccabff0a2b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.435575710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a7c0e821-9a3c-4a26-9d39-5ccabff0a2b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:25:40 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:25:40.435846708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178365177645624,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0fca5db1c6e6cd414f6e628eb656f54fea10276fdbed1480e151c2b78ccaa2,PodSandboxId:2068401dd05a9d5f7d28baf7bce29314378d331360632dd9ac6d5c7d9fa16f0c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178342875716149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65a34d3b-218a-456c-8c23-ec8d153cbbc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4968ce,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc,PodSandboxId:3e8a8afb8a5e56348c944e709ad020f062e60c4c354826b59b020a9bb30b4ab6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178341324178540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mklhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53629562-a50d-4ca5-80ab-baed4852b4d7,},Annotations:map[string]string{io.kubernetes.container.hash: 47d386ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178334232828396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139,PodSandboxId:a44b1838edc1b67b0c2a39fc2c9ffc3d0030a856acdcf935918e0b11d16572dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178333997550171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x4zbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
47f6c48-c4de-4feb-a3ea-8874c980d263,},Annotations:map[string]string{io.kubernetes.container.hash: 33bcdd1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591,PodSandboxId:38c35866ffc89e09cf124615c84b76d9bd1995a227016a5a3e9b7ec3a5e6f28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178327498291438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e419dd8a9426a70be6e020ac0e950e19,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf,PodSandboxId:9333f19493abfe672b5f468de087fc27e69c4dd7b3bd12390d48b7978d48d5b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178327031893161,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e04e11a7b4eef363358253e1bcb9bbb,},An
notations:map[string]string{io.kubernetes.container.hash: 3f303518,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928,PodSandboxId:de6c3901c21d62e93a43ad72a4e058f4436cc931f8fccc032f22f277e21b961b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178327084269912,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c0a5a5ea38cfcbc96a50f8fa8b28db,},An
notations:map[string]string{io.kubernetes.container.hash: b0b33473,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687,PodSandboxId:d22c3191e55246293aac485dd5eed29f79c4d428394f317268e523b152ee38f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178327129366826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
6a4b6de4f1fe8085ff32bfcacd2354a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a7c0e821-9a3c-4a26-9d39-5ccabff0a2b3 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0198578b96c6d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   fb5a41cb7e246       storage-provisioner
	2a0fca5db1c6e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   2068401dd05a9       busybox
	5520a46163d9a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   3e8a8afb8a5e5       coredns-5dd5756b68-mklhw
	94c1196dd672c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   fb5a41cb7e246       storage-provisioner
	4c95bbf4f285b       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      13 minutes ago      Running             kube-proxy                1                   a44b1838edc1b       kube-proxy-x4zbh
	742064a59716b       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      13 minutes ago      Running             kube-scheduler            1                   38c35866ffc89       kube-scheduler-default-k8s-diff-port-643126
	7e5201f16577b       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      13 minutes ago      Running             kube-controller-manager   1                   d22c3191e5524       kube-controller-manager-default-k8s-diff-port-643126
	cc891cea4cf91       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      13 minutes ago      Running             kube-apiserver            1                   de6c3901c21d6       kube-apiserver-default-k8s-diff-port-643126
	297b00416e9d4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   9333f19493abf       etcd-default-k8s-diff-port-643126
	
	* 
	* ==> coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45270 - 44506 "HINFO IN 4684813267403133358.2973808512917307922. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010024308s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-643126
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-643126
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=default-k8s-diff-port-643126
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T20_04_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 20:04:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-643126
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 20:25:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 20:22:55 +0000   Tue, 24 Oct 2023 20:04:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 20:22:55 +0000   Tue, 24 Oct 2023 20:04:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 20:22:55 +0000   Tue, 24 Oct 2023 20:04:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 20:22:55 +0000   Tue, 24 Oct 2023 20:12:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.148
	  Hostname:    default-k8s-diff-port-643126
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 b71eed24e60a4ca1869c2bb0fec81460
	  System UUID:                b71eed24-e60a-4ca1-869c-2bb0fec81460
	  Boot ID:                    d3527ccf-a3b5-4214-80ca-d143812274e4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-mklhw                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-643126                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-643126              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-643126     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-x4zbh                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-643126              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-lmxdt                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-643126 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-643126 event: Registered Node default-k8s-diff-port-643126 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-643126 event: Registered Node default-k8s-diff-port-643126 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct24 20:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076321] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.429048] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.161299] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.138068] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.499490] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.257405] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.116874] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.164731] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.113474] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.258182] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[Oct24 20:12] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[ +14.990963] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] <==
	* {"level":"info","ts":"2023-10-24T20:12:09.091839Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cf942be0a1301ad","local-member-id":"d94a8047b7882d6e","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T20:12:09.091886Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T20:12:10.675577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94a8047b7882d6e is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-24T20:12:10.675622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94a8047b7882d6e became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-24T20:12:10.67565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94a8047b7882d6e received MsgPreVoteResp from d94a8047b7882d6e at term 2"}
	{"level":"info","ts":"2023-10-24T20:12:10.675663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94a8047b7882d6e became candidate at term 3"}
	{"level":"info","ts":"2023-10-24T20:12:10.675669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94a8047b7882d6e received MsgVoteResp from d94a8047b7882d6e at term 3"}
	{"level":"info","ts":"2023-10-24T20:12:10.675677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94a8047b7882d6e became leader at term 3"}
	{"level":"info","ts":"2023-10-24T20:12:10.675687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94a8047b7882d6e elected leader d94a8047b7882d6e at term 3"}
	{"level":"info","ts":"2023-10-24T20:12:10.677483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T20:12:10.677634Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T20:12:10.678655Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.148:2379"}
	{"level":"info","ts":"2023-10-24T20:12:10.678976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T20:12:10.677488Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d94a8047b7882d6e","local-member-attributes":"{Name:default-k8s-diff-port-643126 ClientURLs:[https://192.168.61.148:2379]}","request-path":"/0/members/d94a8047b7882d6e/attributes","cluster-id":"8cf942be0a1301ad","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T20:12:10.678985Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T20:12:10.679182Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T20:12:19.786397Z","caller":"traceutil/trace.go:171","msg":"trace[2117280150] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"121.702283ms","start":"2023-10-24T20:12:19.664661Z","end":"2023-10-24T20:12:19.786363Z","steps":["trace[2117280150] 'process raft request'  (duration: 121.556468ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:12:19.914383Z","caller":"traceutil/trace.go:171","msg":"trace[1554939137] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"123.022748ms","start":"2023-10-24T20:12:19.791338Z","end":"2023-10-24T20:12:19.914361Z","steps":["trace[1554939137] 'process raft request'  (duration: 38.987733ms)","trace[1554939137] 'compare'  (duration: 83.456945ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T20:12:20.204867Z","caller":"traceutil/trace.go:171","msg":"trace[877039603] linearizableReadLoop","detail":"{readStateIndex:614; appliedIndex:613; }","duration":"227.267566ms","start":"2023-10-24T20:12:19.977579Z","end":"2023-10-24T20:12:20.204847Z","steps":["trace[877039603] 'read index received'  (duration: 208.97316ms)","trace[877039603] 'applied index is now lower than readState.Index'  (duration: 18.293243ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T20:12:20.205103Z","caller":"traceutil/trace.go:171","msg":"trace[778972609] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"285.420989ms","start":"2023-10-24T20:12:19.919493Z","end":"2023-10-24T20:12:20.204914Z","steps":["trace[778972609] 'process raft request'  (duration: 267.096059ms)","trace[778972609] 'compare'  (duration: 17.828492ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T20:12:20.205188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.631177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-lmxdt\" ","response":"range_response_count:1 size:3866"}
	{"level":"info","ts":"2023-10-24T20:12:20.205315Z","caller":"traceutil/trace.go:171","msg":"trace[1729154886] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-lmxdt; range_end:; response_count:1; response_revision:577; }","duration":"227.802907ms","start":"2023-10-24T20:12:19.977501Z","end":"2023-10-24T20:12:20.205304Z","steps":["trace[1729154886] 'agreement among raft nodes before linearized reading'  (duration: 227.560793ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:22:10.709627Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":851}
	{"level":"info","ts":"2023-10-24T20:22:10.712308Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":851,"took":"2.380512ms","hash":391525199}
	{"level":"info","ts":"2023-10-24T20:22:10.712368Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":391525199,"revision":851,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  20:25:40 up 14 min,  0 users,  load average: 0.05, 0.33, 0.25
	Linux default-k8s-diff-port-643126 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] <==
	* I1024 20:22:12.411114       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:22:13.411325       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:22:13.411413       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:22:13.411426       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:22:13.411530       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:22:13.411678       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:22:13.412759       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:23:12.269648       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:23:13.412174       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:23:13.412305       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:23:13.412340       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:23:13.413333       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:23:13.413431       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:23:13.413438       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:24:12.269868       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 20:25:12.269507       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:25:13.412453       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:25:13.412591       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:25:13.412667       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:25:13.414090       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:25:13.414284       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:25:13.414348       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] <==
	* I1024 20:19:55.651813       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:20:25.051489       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:20:25.662325       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:20:55.059524       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:20:55.671141       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:21:25.067858       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:21:25.680814       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:21:55.074796       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:21:55.691899       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:22:25.080651       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:22:25.700081       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:22:55.088336       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:22:55.708458       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:23:25.094270       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:23:25.718845       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1024 20:23:34.920338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="420.483µs"
	I1024 20:23:46.916696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="219.805µs"
	E1024 20:23:55.099708       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:23:55.728166       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:24:25.105127       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:24:25.740756       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:24:55.110152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:24:55.750567       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:25:25.117307       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:25:25.767562       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] <==
	* I1024 20:12:14.462772       1 server_others.go:69] "Using iptables proxy"
	I1024 20:12:14.531364       1 node.go:141] Successfully retrieved node IP: 192.168.61.148
	I1024 20:12:14.812227       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 20:12:14.812274       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 20:12:14.816273       1 server_others.go:152] "Using iptables Proxier"
	I1024 20:12:14.816411       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 20:12:14.817087       1 server.go:846] "Version info" version="v1.28.3"
	I1024 20:12:14.817651       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:12:14.818749       1 config.go:188] "Starting service config controller"
	I1024 20:12:14.818799       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 20:12:14.818820       1 config.go:97] "Starting endpoint slice config controller"
	I1024 20:12:14.818823       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 20:12:14.821818       1 config.go:315] "Starting node config controller"
	I1024 20:12:14.821852       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 20:12:14.919951       1 shared_informer.go:318] Caches are synced for service config
	I1024 20:12:14.920114       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 20:12:14.923109       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] <==
	* I1024 20:12:09.445279       1 serving.go:348] Generated self-signed cert in-memory
	W1024 20:12:12.355464       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 20:12:12.355542       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 20:12:12.355570       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 20:12:12.355594       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 20:12:12.440118       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 20:12:12.440206       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:12:12.446848       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 20:12:12.446962       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 20:12:12.448347       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 20:12:12.446978       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 20:12:12.549301       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 20:11:37 UTC, ends at Tue 2023-10-24 20:25:41 UTC. --
	Oct 24 20:23:05 default-k8s-diff-port-643126 kubelet[931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:23:05 default-k8s-diff-port-643126 kubelet[931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:23:06 default-k8s-diff-port-643126 kubelet[931]: E1024 20:23:06.900200     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:23:20 default-k8s-diff-port-643126 kubelet[931]: E1024 20:23:20.910811     931 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 24 20:23:20 default-k8s-diff-port-643126 kubelet[931]: E1024 20:23:20.910867     931 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 24 20:23:20 default-k8s-diff-port-643126 kubelet[931]: E1024 20:23:20.911193     931 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zgrtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-lmxdt_kube-system(9b235003-ac4a-491b-af2e-9af54e79922c): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 24 20:23:20 default-k8s-diff-port-643126 kubelet[931]: E1024 20:23:20.911247     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:23:34 default-k8s-diff-port-643126 kubelet[931]: E1024 20:23:34.900550     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:23:46 default-k8s-diff-port-643126 kubelet[931]: E1024 20:23:46.899650     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:24:01 default-k8s-diff-port-643126 kubelet[931]: E1024 20:24:01.900388     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:24:05 default-k8s-diff-port-643126 kubelet[931]: E1024 20:24:05.921941     931 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:24:05 default-k8s-diff-port-643126 kubelet[931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:24:05 default-k8s-diff-port-643126 kubelet[931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:24:05 default-k8s-diff-port-643126 kubelet[931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:24:15 default-k8s-diff-port-643126 kubelet[931]: E1024 20:24:15.899938     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:24:30 default-k8s-diff-port-643126 kubelet[931]: E1024 20:24:30.899895     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:24:41 default-k8s-diff-port-643126 kubelet[931]: E1024 20:24:41.901187     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:24:52 default-k8s-diff-port-643126 kubelet[931]: E1024 20:24:52.899533     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:25:05 default-k8s-diff-port-643126 kubelet[931]: E1024 20:25:05.900218     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:25:06 default-k8s-diff-port-643126 kubelet[931]: E1024 20:25:06.022137     931 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:25:06 default-k8s-diff-port-643126 kubelet[931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:25:06 default-k8s-diff-port-643126 kubelet[931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:25:06 default-k8s-diff-port-643126 kubelet[931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:25:17 default-k8s-diff-port-643126 kubelet[931]: E1024 20:25:17.903265     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:25:28 default-k8s-diff-port-643126 kubelet[931]: E1024 20:25:28.899656     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	
	* 
	* ==> storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] <==
	* I1024 20:12:45.332861       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 20:12:45.351280       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 20:12:45.351457       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 20:13:02.753900       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 20:13:02.754214       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-643126_4be52ba6-9b59-46c1-96ca-19a76a5b2a3d!
	I1024 20:13:02.756690       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"38aa9f7b-a64f-4486-8c9a-e6ebab2efbcb", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-643126_4be52ba6-9b59-46c1-96ca-19a76a5b2a3d became leader
	I1024 20:13:02.854462       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-643126_4be52ba6-9b59-46c1-96ca-19a76a5b2a3d!
	
	* 
	* ==> storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] <==
	* I1024 20:12:14.585865       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1024 20:12:44.595803       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-643126 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-lmxdt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-643126 describe pod metrics-server-57f55c9bc5-lmxdt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-643126 describe pod metrics-server-57f55c9bc5-lmxdt: exit status 1 (74.239401ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-lmxdt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-643126 describe pod metrics-server-57f55c9bc5-lmxdt: exit status 1
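Note: the list at helpers_test.go:272 still saw metrics-server-57f55c9bc5-lmxdt, but the describe that followed returned NotFound, which suggests the pod was deleted or replaced in the short window between the two commands. A sketch for re-checking any remaining non-running pods after such a race (-o wide only adds the node and pod IP columns):

    kubectl --context default-k8s-diff-port-643126 get pods -A \
      --field-selector=status.phase!=Running -o wide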
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1024 20:18:10.558545   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 20:18:19.104536   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-014826 -n no-preload-014826
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-24 20:27:02.787008039 +0000 UTC m=+5184.552487045
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
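Note: the pods being waited for here are the ones that addons enable dashboard -p no-preload-014826 (listed in the Audit table below with no recorded End Time) should have created. A manual equivalent of the failed check, assuming the kubeconfig context carries the profile name as the other kubectl invocations in this report do, would be:

    kubectl --context no-preload-014826 -n kubernetes-dashboard \
      wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=60s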
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014826 -n no-preload-014826
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-014826 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-014826 logs -n 25: (1.646548569s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-636215                                        | pause-636215                 | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:01 UTC |
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-145190                              | stopped-upgrade-145190       | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:01 UTC |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:02 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-087071 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | disable-driver-mounts-087071                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:05 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-014826             | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-867165            | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC | 24 Oct 23 20:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-643126  | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC | 24 Oct 23 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC |                     |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-014826                  | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-867165                 | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467375        | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-643126       | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:08 UTC | 24 Oct 23 20:16 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467375             | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC | 24 Oct 23 20:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 20:09:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 20:09:32.850310   50077 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:09:32.850450   50077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:09:32.850462   50077 out.go:309] Setting ErrFile to fd 2...
	I1024 20:09:32.850470   50077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:09:32.850632   50077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:09:32.851152   50077 out.go:303] Setting JSON to false
	I1024 20:09:32.851985   50077 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6471,"bootTime":1698171702,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 20:09:32.852046   50077 start.go:138] virtualization: kvm guest
	I1024 20:09:32.854420   50077 out.go:177] * [old-k8s-version-467375] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 20:09:32.855945   50077 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:09:32.855955   50077 notify.go:220] Checking for updates...
	I1024 20:09:32.857502   50077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:09:32.858984   50077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:09:32.860444   50077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:09:32.861833   50077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 20:09:32.863229   50077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:09:32.864917   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:09:32.865284   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:09:32.865345   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:09:32.879470   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I1024 20:09:32.879865   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:09:32.880332   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:09:32.880355   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:09:32.880731   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:09:32.880894   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:09:32.882647   50077 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 20:09:32.884050   50077 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:09:32.884316   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:09:32.884351   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:09:32.897671   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38215
	I1024 20:09:32.898054   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:09:32.898495   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:09:32.898521   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:09:32.898837   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:09:32.899002   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:09:32.933365   50077 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 20:09:32.934993   50077 start.go:298] selected driver: kvm2
	I1024 20:09:32.935008   50077 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:09:32.935100   50077 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:09:32.935713   50077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:09:32.935789   50077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 20:09:32.949274   50077 install.go:137] /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1024 20:09:32.949613   50077 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 20:09:32.949670   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:09:32.949682   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:09:32.949693   50077 start_flags.go:323] config:
	{Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:09:32.949823   50077 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:09:32.951734   50077 out.go:177] * Starting control plane node old-k8s-version-467375 in cluster old-k8s-version-467375
	I1024 20:09:31.289529   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:32.953102   50077 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 20:09:32.953131   50077 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1024 20:09:32.953140   50077 cache.go:57] Caching tarball of preloaded images
	I1024 20:09:32.953220   50077 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 20:09:32.953230   50077 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1024 20:09:32.953361   50077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:09:32.953531   50077 start.go:365] acquiring machines lock for old-k8s-version-467375: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:09:37.369555   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:40.441571   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:46.521544   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:49.593529   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:55.673497   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:58.745605   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:04.825563   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:07.897530   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:13.977541   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:17.049658   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:23.129561   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:26.201528   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:32.281583   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:35.353592   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:41.433570   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:44.505586   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:50.585514   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:53.657506   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:59.737620   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:11:02.809631   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:11:05.812536   49198 start.go:369] acquired machines lock for "embed-certs-867165" in 4m26.940203259s
	I1024 20:11:05.812584   49198 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:05.812594   49198 fix.go:54] fixHost starting: 
	I1024 20:11:05.812911   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:05.812959   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:05.827853   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I1024 20:11:05.828400   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:05.828896   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:05.828922   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:05.829237   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:05.829432   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:05.829588   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:05.831229   49198 fix.go:102] recreateIfNeeded on embed-certs-867165: state=Stopped err=<nil>
	I1024 20:11:05.831249   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	W1024 20:11:05.831407   49198 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:05.833007   49198 out.go:177] * Restarting existing kvm2 VM for "embed-certs-867165" ...
	I1024 20:11:05.810496   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:05.810546   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:11:05.812388   49071 machine.go:91] provisioned docker machine in 4m37.419019216s
	I1024 20:11:05.812422   49071 fix.go:56] fixHost completed within 4m37.4383256s
	I1024 20:11:05.812427   49071 start.go:83] releasing machines lock for "no-preload-014826", held for 4m37.438344867s
	W1024 20:11:05.812453   49071 start.go:691] error starting host: provision: host is not running
	W1024 20:11:05.812538   49071 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1024 20:11:05.812551   49071 start.go:706] Will try again in 5 seconds ...
	I1024 20:11:05.834235   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Start
	I1024 20:11:05.834397   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring networks are active...
	I1024 20:11:05.835212   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring network default is active
	I1024 20:11:05.835540   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring network mk-embed-certs-867165 is active
	I1024 20:11:05.835850   49198 main.go:141] libmachine: (embed-certs-867165) Getting domain xml...
	I1024 20:11:05.836556   49198 main.go:141] libmachine: (embed-certs-867165) Creating domain...
	I1024 20:11:07.054253   49198 main.go:141] libmachine: (embed-certs-867165) Waiting to get IP...
	I1024 20:11:07.055379   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.055819   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.055911   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.055829   50328 retry.go:31] will retry after 212.147571ms: waiting for machine to come up
	I1024 20:11:07.269505   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.269953   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.270002   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.269942   50328 retry.go:31] will retry after 308.705783ms: waiting for machine to come up
	I1024 20:11:07.580602   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.581075   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.581103   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.581041   50328 retry.go:31] will retry after 467.682838ms: waiting for machine to come up
	I1024 20:11:08.050725   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:08.051121   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:08.051154   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:08.051070   50328 retry.go:31] will retry after 399.648518ms: waiting for machine to come up
	I1024 20:11:08.452605   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:08.452968   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:08.452999   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:08.452906   50328 retry.go:31] will retry after 617.165915ms: waiting for machine to come up
	I1024 20:11:10.812763   49071 start.go:365] acquiring machines lock for no-preload-014826: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:11:09.071803   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:09.072236   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:09.072268   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:09.072205   50328 retry.go:31] will retry after 678.895198ms: waiting for machine to come up
	I1024 20:11:09.753179   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:09.753658   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:09.753689   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:09.753600   50328 retry.go:31] will retry after 807.254598ms: waiting for machine to come up
	I1024 20:11:10.562345   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:10.562733   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:10.562761   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:10.562688   50328 retry.go:31] will retry after 921.950476ms: waiting for machine to come up
	I1024 20:11:11.485981   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:11.486498   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:11.486524   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:11.486452   50328 retry.go:31] will retry after 1.56679652s: waiting for machine to come up
	I1024 20:11:13.055209   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:13.055638   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:13.055664   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:13.055594   50328 retry.go:31] will retry after 2.296157501s: waiting for machine to come up
	I1024 20:11:15.355156   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:15.355522   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:15.355555   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:15.355460   50328 retry.go:31] will retry after 1.913484523s: waiting for machine to come up
	I1024 20:11:17.270771   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:17.271200   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:17.271237   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:17.271154   50328 retry.go:31] will retry after 2.867410465s: waiting for machine to come up
	I1024 20:11:20.142209   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:20.142651   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:20.142675   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:20.142603   50328 retry.go:31] will retry after 4.193720328s: waiting for machine to come up
	I1024 20:11:25.925856   49708 start.go:369] acquired machines lock for "default-k8s-diff-port-643126" in 3m22.313323811s
	I1024 20:11:25.925904   49708 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:25.925911   49708 fix.go:54] fixHost starting: 
	I1024 20:11:25.926296   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:25.926331   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:25.942871   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I1024 20:11:25.943321   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:25.943866   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:11:25.943890   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:25.944187   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:25.944359   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:25.944510   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:11:25.945833   49708 fix.go:102] recreateIfNeeded on default-k8s-diff-port-643126: state=Stopped err=<nil>
	I1024 20:11:25.945875   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	W1024 20:11:25.946039   49708 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:25.949057   49708 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-643126" ...
	I1024 20:11:24.340353   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.340876   49198 main.go:141] libmachine: (embed-certs-867165) Found IP for machine: 192.168.72.10
	I1024 20:11:24.340899   49198 main.go:141] libmachine: (embed-certs-867165) Reserving static IP address...
	I1024 20:11:24.340912   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has current primary IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.341389   49198 main.go:141] libmachine: (embed-certs-867165) Reserved static IP address: 192.168.72.10
	I1024 20:11:24.341430   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "embed-certs-867165", mac: "52:54:00:59:66:c6", ip: "192.168.72.10"} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.341453   49198 main.go:141] libmachine: (embed-certs-867165) Waiting for SSH to be available...
	I1024 20:11:24.341482   49198 main.go:141] libmachine: (embed-certs-867165) DBG | skip adding static IP to network mk-embed-certs-867165 - found existing host DHCP lease matching {name: "embed-certs-867165", mac: "52:54:00:59:66:c6", ip: "192.168.72.10"}
	I1024 20:11:24.341500   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Getting to WaitForSSH function...
	I1024 20:11:24.343707   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.344021   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.344050   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.344202   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Using SSH client type: external
	I1024 20:11:24.344229   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa (-rw-------)
	I1024 20:11:24.344263   49198 main.go:141] libmachine: (embed-certs-867165) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:11:24.344279   49198 main.go:141] libmachine: (embed-certs-867165) DBG | About to run SSH command:
	I1024 20:11:24.344290   49198 main.go:141] libmachine: (embed-certs-867165) DBG | exit 0
	I1024 20:11:24.433113   49198 main.go:141] libmachine: (embed-certs-867165) DBG | SSH cmd err, output: <nil>: 
	I1024 20:11:24.433578   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetConfigRaw
	I1024 20:11:24.434267   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:24.436768   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.437149   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.437178   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.437479   49198 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/config.json ...
	I1024 20:11:24.437738   49198 machine.go:88] provisioning docker machine ...
	I1024 20:11:24.437760   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:24.438014   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.438217   49198 buildroot.go:166] provisioning hostname "embed-certs-867165"
	I1024 20:11:24.438245   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.438431   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.440509   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.440861   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.440884   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.440998   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:24.441155   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.441329   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.441499   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:24.441644   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:24.441990   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:24.442009   49198 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-867165 && echo "embed-certs-867165" | sudo tee /etc/hostname
	I1024 20:11:24.570417   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-867165
	
	I1024 20:11:24.570456   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.573010   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.573421   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.573446   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.573634   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:24.573845   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.574000   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.574100   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:24.574296   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:24.574611   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:24.574628   49198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-867165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-867165/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-867165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:11:24.698255   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:24.698281   49198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:11:24.698298   49198 buildroot.go:174] setting up certificates
	I1024 20:11:24.698306   49198 provision.go:83] configureAuth start
	I1024 20:11:24.698317   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.698624   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:24.701552   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.701900   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.701954   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.702044   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.704047   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.704389   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.704413   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.704578   49198 provision.go:138] copyHostCerts
	I1024 20:11:24.704632   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:11:24.704648   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:11:24.704713   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:11:24.704794   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:11:24.704801   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:11:24.704828   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:11:24.704877   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:11:24.704883   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:11:24.704901   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:11:24.704961   49198 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.embed-certs-867165 san=[192.168.72.10 192.168.72.10 localhost 127.0.0.1 minikube embed-certs-867165]
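[Editor's note] configureAuth generates a server certificate whose SANs cover the VM's IP, localhost and the machine name listed in the line above. A minimal stdlib-only sketch of producing a certificate with those SANs (self-signed here for brevity; the real flow signs with ca.pem/ca-key.pem, and the SAN values are copied from the log, not prescriptive):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SANs mirroring the san=[...] list in the log line above.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-867165"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "embed-certs-867165"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.72.10"), net.ParseIP("127.0.0.1")},
        }
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Self-signed for the sketch; a real server cert would use the CA as parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }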
	I1024 20:11:25.212018   49198 provision.go:172] copyRemoteCerts
	I1024 20:11:25.212075   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:11:25.212095   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.214791   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.215112   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.215141   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.215262   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.215490   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.215682   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.215805   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.301782   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:11:25.324352   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1024 20:11:25.346349   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:11:25.368012   49198 provision.go:86] duration metric: configureAuth took 669.695412ms
	I1024 20:11:25.368036   49198 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:11:25.368205   49198 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:25.368269   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.370479   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.370739   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.370782   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.370873   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.371063   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.371395   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.371593   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.371760   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:25.372083   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:25.372098   49198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:11:25.685250   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:11:25.685327   49198 machine.go:91] provisioned docker machine in 1.247541762s
	I1024 20:11:25.685347   49198 start.go:300] post-start starting for "embed-certs-867165" (driver="kvm2")
	I1024 20:11:25.685363   49198 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:11:25.685388   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.685781   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:11:25.685813   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.688378   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.688666   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.688712   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.688886   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.689115   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.689274   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.689463   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.775321   49198 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:11:25.779494   49198 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:11:25.779516   49198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:11:25.779590   49198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:11:25.779663   49198 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:11:25.779748   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:11:25.788441   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:25.809843   49198 start.go:303] post-start completed in 124.478424ms
	I1024 20:11:25.809946   49198 fix.go:56] fixHost completed within 19.997269664s
	I1024 20:11:25.809985   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.812709   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.813101   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.813133   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.813265   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.813464   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.813650   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.813819   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.813962   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:25.814293   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:25.814309   49198 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:11:25.925691   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178285.873274561
	
	I1024 20:11:25.925721   49198 fix.go:206] guest clock: 1698178285.873274561
	I1024 20:11:25.925731   49198 fix.go:219] Guest: 2023-10-24 20:11:25.873274561 +0000 UTC Remote: 2023-10-24 20:11:25.809967209 +0000 UTC m=+287.089115618 (delta=63.307352ms)
	I1024 20:11:25.925760   49198 fix.go:190] guest clock delta is within tolerance: 63.307352ms
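[Editor's note] The clock check above compares the guest's `date +%s.%N` output against the host's wall clock and only accepts the fixed host when the skew is within a tolerance. A minimal sketch of that comparison (the 2s tolerance is an assumption for illustration, not the value fix.go actually uses):

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest/host clock skew is acceptable.
    func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol
    }

    func main() {
        guest := time.Unix(1698178285, 873274561) // parsed from `date +%s.%N` on the VM
        host := time.Unix(1698178285, 809967209)  // host wall clock at the same moment
        d, ok := withinTolerance(guest, host, 2*time.Second)
        fmt.Printf("delta=%v within=%v\n", d, ok)
    }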
	I1024 20:11:25.925767   49198 start.go:83] releasing machines lock for "embed-certs-867165", held for 20.113201351s
	I1024 20:11:25.925801   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.926046   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:25.928979   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.929337   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.929369   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.929547   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930011   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930171   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930239   49198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:11:25.930285   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.930332   49198 ssh_runner.go:195] Run: cat /version.json
	I1024 20:11:25.930356   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.932685   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.932918   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933167   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.933197   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933225   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.933254   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933377   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.933548   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.933600   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.933758   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.933773   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.933934   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.933941   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.934075   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:26.046804   49198 ssh_runner.go:195] Run: systemctl --version
	I1024 20:11:26.052139   49198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:11:26.195404   49198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:11:26.201515   49198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:11:26.201602   49198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:11:26.215298   49198 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:11:26.215312   49198 start.go:472] detecting cgroup driver to use...
	I1024 20:11:26.215375   49198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:11:26.228683   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:11:26.240279   49198 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:11:26.240328   49198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:11:26.252314   49198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:11:26.264748   49198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:11:26.363370   49198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:11:26.472219   49198 docker.go:214] disabling docker service ...
	I1024 20:11:26.472293   49198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:11:26.485325   49198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:11:26.497949   49198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:11:26.614981   49198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:11:26.731140   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:11:26.750199   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:11:26.770158   49198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:11:26.770224   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.781180   49198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:11:26.781246   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.791901   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.802435   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.812848   49198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:11:26.826330   49198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:11:26.837268   49198 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:11:26.837350   49198 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:11:26.853637   49198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
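[Editor's note] The netfilter probe above is a check-then-fallback sequence: if the bridge sysctl key is missing (the bridge module is not loaded yet, which is expected on a fresh guest), load br_netfilter and make sure IPv4 forwarding is on. A rough, stand-alone sketch of that ordering, assuming it runs as root on a Linux guest:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %w (%s)", name, args, err, out)
        }
        return nil
    }

    func main() {
        // A missing sysctl key is tolerated, exactly as the log notes:
        // fall back to loading the kernel module instead.
        if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("sysctl probe failed, loading br_netfilter:", err)
            if err := run("modprobe", "br_netfilter"); err != nil {
                fmt.Println("modprobe failed:", err)
            }
        }
        if err := run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
        }
    }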
	I1024 20:11:26.866347   49198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:11:26.985185   49198 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:11:27.154650   49198 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:11:27.154718   49198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:11:27.160801   49198 start.go:540] Will wait 60s for crictl version
	I1024 20:11:27.160848   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:11:27.164920   49198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:11:27.202690   49198 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:11:27.202779   49198 ssh_runner.go:195] Run: crio --version
	I1024 20:11:27.250594   49198 ssh_runner.go:195] Run: crio --version
	I1024 20:11:27.296108   49198 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 20:11:25.950421   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Start
	I1024 20:11:25.950594   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring networks are active...
	I1024 20:11:25.951296   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring network default is active
	I1024 20:11:25.951666   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring network mk-default-k8s-diff-port-643126 is active
	I1024 20:11:25.952059   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Getting domain xml...
	I1024 20:11:25.952807   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Creating domain...
	I1024 20:11:27.231286   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting to get IP...
	I1024 20:11:27.232283   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.232673   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.232749   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.232677   50444 retry.go:31] will retry after 208.58934ms: waiting for machine to come up
	I1024 20:11:27.443376   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.443879   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.443919   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.443821   50444 retry.go:31] will retry after 257.382495ms: waiting for machine to come up
	I1024 20:11:27.703424   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.703968   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.704002   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.703931   50444 retry.go:31] will retry after 397.047762ms: waiting for machine to come up
	I1024 20:11:28.102593   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.103138   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.103169   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:28.103091   50444 retry.go:31] will retry after 512.560427ms: waiting for machine to come up
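[Editor's note] Interleaved with the embed-certs provisioning, the default-k8s-diff-port VM is still booting, so the driver repeatedly looks for a DHCP lease for its MAC address and backs off between attempts (the retry.go:31 lines). A minimal, self-contained sketch of that wait loop with growing, jittered delays; lookupIP is a stand-in for the real libvirt lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("unable to find current IP address")

    // lookupIP stands in for querying libvirt's DHCP leases by MAC address.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errNoLease
        }
        return "192.168.x.x", nil // placeholder address
    }

    func main() {
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("machine came up with IP", ip)
                return
            }
            // Grow the delay roughly like the log: hundreds of ms up to seconds,
            // with jitter so retries don't align.
            delay := time.Duration(200*attempt)*time.Millisecond +
                time.Duration(rand.Intn(200))*time.Millisecond
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
        }
    }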
	I1024 20:11:27.297540   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:27.300396   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:27.300799   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:27.300829   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:27.301066   49198 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1024 20:11:27.305045   49198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:27.320300   49198 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:11:27.320366   49198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:27.359702   49198 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:11:27.359766   49198 ssh_runner.go:195] Run: which lz4
	I1024 20:11:27.363540   49198 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 20:11:27.367559   49198 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:11:27.367583   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
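[Editor's note] Before copying the ~458 MB preload tarball, the runner probes the guest with `stat` and treats a non-zero exit as "not present". The same exit-status branching can be expressed locally like this (path and wording are illustrative, not minikube's ssh_runner API):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Probe for the tarball the way the log does: run stat and branch on
        // the exit status instead of parsing its output.
        cmd := exec.Command("stat", "-c", "%s %y", "/preloaded.tar.lz4")
        out, err := cmd.Output()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Printf("already present: %s", out)
        case errors.As(err, &exitErr):
            fmt.Println("not present (stat exited", exitErr.ExitCode(), "), would copy the tarball now")
        default:
            fmt.Println("could not run stat:", err)
        }
    }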
	I1024 20:11:28.616845   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.617310   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.617342   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:28.617240   50444 retry.go:31] will retry after 674.554893ms: waiting for machine to come up
	I1024 20:11:29.293139   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:29.293640   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:29.293667   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:29.293603   50444 retry.go:31] will retry after 903.982479ms: waiting for machine to come up
	I1024 20:11:30.199764   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:30.200181   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:30.200218   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:30.200119   50444 retry.go:31] will retry after 835.036056ms: waiting for machine to come up
	I1024 20:11:31.037123   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:31.037584   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:31.037609   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:31.037524   50444 retry.go:31] will retry after 1.242617103s: waiting for machine to come up
	I1024 20:11:32.281808   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:32.282287   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:32.282312   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:32.282243   50444 retry.go:31] will retry after 1.694327665s: waiting for machine to come up
	I1024 20:11:29.249631   49198 crio.go:444] Took 1.886122 seconds to copy over tarball
	I1024 20:11:29.249712   49198 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:11:32.249370   49198 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.999632152s)
	I1024 20:11:32.249396   49198 crio.go:451] Took 2.999736 seconds to extract the tarball
	I1024 20:11:32.249404   49198 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:11:32.290929   49198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:32.335293   49198 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:11:32.335313   49198 cache_images.go:84] Images are preloaded, skipping loading
	I1024 20:11:32.335377   49198 ssh_runner.go:195] Run: crio config
	I1024 20:11:32.394988   49198 cni.go:84] Creating CNI manager for ""
	I1024 20:11:32.395016   49198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:32.395039   49198 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:11:32.395066   49198 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.10 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-867165 NodeName:embed-certs-867165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:11:32.395267   49198 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-867165"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:11:32.395363   49198 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-867165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-867165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
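[Editor's note] Both the kubeadm YAML above and the kubelet unit drop-in are rendered from the cluster/node config and then written to the guest as plain bytes (the "scp memory -->" lines below). A toy rendering with text/template, assuming only a few fields taken from the config dump above (this is not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    type nodeConfig struct {
        Name              string
        NodeIP            string
        KubernetesVersion string
    }

    const kubeletUnit = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet \
      --hostname-override={{.Name}} --node-ip={{.NodeIP}} \
      --config=/var/lib/kubelet/config.yaml
    `

    func main() {
        cfg := nodeConfig{
            Name:              "embed-certs-867165",
            NodeIP:            "192.168.72.10",
            KubernetesVersion: "v1.28.3",
        }
        tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
        // In the real flow the rendered bytes go over SSH; here we just print them.
        if err := tmpl.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }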
	I1024 20:11:32.395412   49198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:11:32.408764   49198 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:11:32.408827   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:11:32.417504   49198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1024 20:11:32.433991   49198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:11:32.450599   49198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1024 20:11:32.467822   49198 ssh_runner.go:195] Run: grep 192.168.72.10	control-plane.minikube.internal$ /etc/hosts
	I1024 20:11:32.471830   49198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:32.485398   49198 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165 for IP: 192.168.72.10
	I1024 20:11:32.485440   49198 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:32.485591   49198 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:11:32.485627   49198 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:11:32.485692   49198 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/client.key
	I1024 20:11:32.485751   49198 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.key.802f554a
	I1024 20:11:32.485787   49198 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.key
	I1024 20:11:32.485883   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:11:32.485913   49198 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:11:32.485924   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:11:32.485946   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:11:32.485974   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:11:32.485999   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:11:32.486054   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:32.486664   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:11:32.510981   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:11:32.533691   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:11:32.556372   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:11:32.578805   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:11:32.601563   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:11:32.624846   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:11:32.648498   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:11:32.672429   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:11:32.696146   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:11:32.719078   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:11:32.742894   49198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:11:32.758998   49198 ssh_runner.go:195] Run: openssl version
	I1024 20:11:32.764797   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:11:32.774075   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.778755   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.778809   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.784097   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:11:32.793365   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:11:32.802532   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.806890   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.806936   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.812430   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:11:32.821767   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:11:32.830930   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.835401   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.835455   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.840880   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:11:32.850124   49198 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:11:32.854525   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:11:32.860161   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:11:32.866096   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:11:32.873246   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:11:32.880430   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:11:32.887436   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
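[Editor's note] The run of `openssl x509 -noout -checkend 86400` calls above verifies that each control-plane certificate is still valid for at least another 24 hours before reusing it. The same check in Go, as a hedged sketch (the input path is illustrative and must be readable by the process):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Equivalent of `openssl x509 -noout -in <file> -checkend 86400`.
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h, would regenerate")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }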
	I1024 20:11:32.892960   49198 kubeadm.go:404] StartCluster: {Name:embed-certs-867165 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-867165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:11:32.893073   49198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:11:32.893116   49198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:32.930748   49198 cri.go:89] found id: ""
	I1024 20:11:32.930817   49198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:11:32.939716   49198 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:11:32.939738   49198 kubeadm.go:636] restartCluster start
	I1024 20:11:32.939785   49198 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:11:32.947747   49198 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:32.948905   49198 kubeconfig.go:92] found "embed-certs-867165" server: "https://192.168.72.10:8443"
	I1024 20:11:32.951235   49198 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:11:32.959165   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:32.959215   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:32.970896   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:32.970912   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:32.970957   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:32.980621   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:33.481345   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:33.481442   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:33.492666   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:33.979087   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:33.979490   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:33.979520   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:33.979433   50444 retry.go:31] will retry after 1.877176786s: waiting for machine to come up
	I1024 20:11:35.859337   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:35.859735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:35.859758   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:35.859683   50444 retry.go:31] will retry after 2.235459842s: waiting for machine to come up
	I1024 20:11:38.097481   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:38.097924   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:38.097958   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:38.097878   50444 retry.go:31] will retry after 3.083066899s: waiting for machine to come up
	I1024 20:11:33.981370   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.077568   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.088845   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:34.481489   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.481554   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.492934   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:34.981614   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.981744   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.993154   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:35.480679   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:35.480752   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:35.492474   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:35.981612   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:35.981703   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:35.992389   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:36.480877   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:36.480982   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:36.492142   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:36.980700   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:36.980784   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:36.992439   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:37.480962   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:37.481040   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:37.492219   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:37.980706   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:37.980814   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:37.992040   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:38.481668   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:38.481764   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:38.493319   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.182306   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:41.182647   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:41.182674   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:41.182602   50444 retry.go:31] will retry after 3.348794863s: waiting for machine to come up
	I1024 20:11:38.981418   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:38.981504   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:38.992810   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:39.481357   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:39.481448   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:39.492521   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:39.981019   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:39.981109   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:39.992766   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:40.481341   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:40.481404   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:40.492180   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:40.981106   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:40.981205   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:40.991931   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.481563   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:41.481629   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:41.492601   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.981132   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:41.981226   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:41.992061   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:42.481647   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:42.481713   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:42.492524   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:42.960175   49198 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:11:42.960230   49198 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:11:42.960243   49198 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:11:42.960322   49198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:42.998685   49198 cri.go:89] found id: ""
	I1024 20:11:42.998794   49198 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:11:43.013829   49198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:11:43.023081   49198 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:11:43.023161   49198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:11:43.032165   49198 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:11:43.032189   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:43.148027   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
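Note: the long run of "Checking apiserver status ..." / "pgrep" failures above, ending in "context deadline exceeded", is a roughly 500ms poll for the kube-apiserver process that gives up when its deadline passes. A minimal Go sketch of that pattern, assuming a hypothetical runSSH helper standing in for the ssh_runner calls in the log (not minikube's actual code):

    package apiserverwait

    import (
    	"context"
    	"fmt"
    	"time"
    )

    // waitForAPIServerPID polls for the kube-apiserver process until the context
    // deadline expires, mirroring the ~500ms retry cadence seen in the log above.
    // runSSH is a hypothetical helper that runs a command on the node.
    func waitForAPIServerPID(ctx context.Context, runSSH func(cmd string) error) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		if err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
    			return nil // a PID was found
    		}
    		select {
    		case <-ctx.Done():
    			// This path produces a message like "apiserver error: context deadline exceeded".
    			return fmt.Errorf("apiserver error: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }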
	I1024 20:11:45.942484   50077 start.go:369] acquired machines lock for "old-k8s-version-467375" in 2m12.988914754s
	I1024 20:11:45.942540   50077 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:45.942548   50077 fix.go:54] fixHost starting: 
	I1024 20:11:45.942969   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:45.943007   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:45.960424   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I1024 20:11:45.960851   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:45.961468   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:11:45.961498   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:45.961852   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:45.962045   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:11:45.962231   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:11:45.963803   50077 fix.go:102] recreateIfNeeded on old-k8s-version-467375: state=Stopped err=<nil>
	I1024 20:11:45.963841   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	W1024 20:11:45.964018   50077 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:45.965809   50077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467375" ...
	I1024 20:11:44.535120   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.535710   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Found IP for machine: 192.168.61.148
	I1024 20:11:44.535735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has current primary IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.535742   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Reserving static IP address...
	I1024 20:11:44.536160   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Reserved static IP address: 192.168.61.148
	I1024 20:11:44.536181   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for SSH to be available...
	I1024 20:11:44.536196   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-643126", mac: "52:54:00:9d:a9:b2", ip: "192.168.61.148"} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.536225   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | skip adding static IP to network mk-default-k8s-diff-port-643126 - found existing host DHCP lease matching {name: "default-k8s-diff-port-643126", mac: "52:54:00:9d:a9:b2", ip: "192.168.61.148"}
	I1024 20:11:44.536247   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Getting to WaitForSSH function...
	I1024 20:11:44.538297   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.538634   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.538669   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.538819   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Using SSH client type: external
	I1024 20:11:44.538846   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa (-rw-------)
	I1024 20:11:44.538897   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:11:44.538935   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | About to run SSH command:
	I1024 20:11:44.538947   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | exit 0
	I1024 20:11:44.629136   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | SSH cmd err, output: <nil>: 
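Note: the "Waiting for SSH" step above probes the guest by running a trivial "exit 0" through an external ssh client with host key checking disabled. A rough Go sketch of such a probe; the flags are taken from the log, while the 3s pause between attempts and the attempt count are assumptions for illustration:

    package sshwait

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForSSH runs "exit 0" on the guest until ssh succeeds or attempts run out.
    func waitForSSH(ip, keyPath string, attempts int) error {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"docker@" + ip,
    		"exit 0",
    	}
    	for i := 0; i < attempts; i++ {
    		if err := exec.Command("ssh", args...).Run(); err == nil {
    			return nil // SSH is available
    		}
    		time.Sleep(3 * time.Second) // assumed pause between attempts
    	}
    	return fmt.Errorf("ssh to %s did not become available after %d attempts", ip, attempts)
    }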
	I1024 20:11:44.629505   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetConfigRaw
	I1024 20:11:44.630190   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:44.632462   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.632782   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.632807   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.633035   49708 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/config.json ...
	I1024 20:11:44.633215   49708 machine.go:88] provisioning docker machine ...
	I1024 20:11:44.633231   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:44.633416   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.633566   49708 buildroot.go:166] provisioning hostname "default-k8s-diff-port-643126"
	I1024 20:11:44.633580   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.633778   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.635853   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.636184   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.636217   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.636295   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:44.636462   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.636608   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.636742   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:44.636890   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:44.637307   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:44.637328   49708 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-643126 && echo "default-k8s-diff-port-643126" | sudo tee /etc/hostname
	I1024 20:11:44.775436   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-643126
	
	I1024 20:11:44.775468   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.778835   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.779280   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.779316   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.779494   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:44.779679   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.779810   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.779933   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:44.780147   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:44.780489   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:44.780518   49708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-643126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-643126/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-643126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:11:44.921274   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:44.921332   49708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:11:44.921368   49708 buildroot.go:174] setting up certificates
	I1024 20:11:44.921385   49708 provision.go:83] configureAuth start
	I1024 20:11:44.921404   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.921747   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:44.924977   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.925413   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.925443   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.925641   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.928106   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.928443   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.928484   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.928617   49708 provision.go:138] copyHostCerts
	I1024 20:11:44.928680   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:11:44.928703   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:11:44.928772   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:11:44.928918   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:11:44.928935   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:11:44.928969   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:11:44.929052   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:11:44.929063   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:11:44.929089   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:11:44.929157   49708 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-643126 san=[192.168.61.148 192.168.61.148 localhost 127.0.0.1 minikube default-k8s-diff-port-643126]
	I1024 20:11:45.170614   49708 provision.go:172] copyRemoteCerts
	I1024 20:11:45.170679   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:11:45.170706   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.173876   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.174213   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.174251   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.174522   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.174744   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.174909   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.175033   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.266012   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1024 20:11:45.294626   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:11:45.323773   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:11:45.347515   49708 provision.go:86] duration metric: configureAuth took 426.107365ms
	I1024 20:11:45.347536   49708 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:11:45.347741   49708 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:45.347830   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.351151   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.351529   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.351560   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.351729   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.351898   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.352132   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.352359   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.352540   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:45.353017   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:45.353045   49708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:11:45.673767   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:11:45.673797   49708 machine.go:91] provisioned docker machine in 1.04057128s
	I1024 20:11:45.673809   49708 start.go:300] post-start starting for "default-k8s-diff-port-643126" (driver="kvm2")
	I1024 20:11:45.673821   49708 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:11:45.673844   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.674180   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:11:45.674213   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.677192   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.677621   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.677663   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.677817   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.678021   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.678180   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.678322   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.769507   49708 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:11:45.774136   49708 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:11:45.774161   49708 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:11:45.774240   49708 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:11:45.774333   49708 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:11:45.774456   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:11:45.782941   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:45.806536   49708 start.go:303] post-start completed in 132.710109ms
	I1024 20:11:45.806565   49708 fix.go:56] fixHost completed within 19.880653804s
	I1024 20:11:45.806602   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.809496   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.809854   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.809892   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.810096   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.810335   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.810534   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.810697   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.810870   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:45.811297   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:45.811312   49708 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:11:45.942309   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178305.886866858
	
	I1024 20:11:45.942334   49708 fix.go:206] guest clock: 1698178305.886866858
	I1024 20:11:45.942343   49708 fix.go:219] Guest: 2023-10-24 20:11:45.886866858 +0000 UTC Remote: 2023-10-24 20:11:45.806569839 +0000 UTC m=+222.349889294 (delta=80.297019ms)
	I1024 20:11:45.942388   49708 fix.go:190] guest clock delta is within tolerance: 80.297019ms
	I1024 20:11:45.942399   49708 start.go:83] releasing machines lock for "default-k8s-diff-port-643126", held for 20.016514097s
	I1024 20:11:45.942428   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.942819   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:45.946079   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.946507   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.946548   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.946681   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947120   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947286   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947353   49708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:11:45.947411   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.947564   49708 ssh_runner.go:195] Run: cat /version.json
	I1024 20:11:45.947591   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.950504   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.950930   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.950984   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951010   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951176   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.951342   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.951499   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.951522   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.951526   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951638   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.951793   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.951946   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.952178   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.952345   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:46.043544   49708 ssh_runner.go:195] Run: systemctl --version
	I1024 20:11:46.072510   49708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:11:46.230010   49708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:11:46.237538   49708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:11:46.237608   49708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:11:46.259449   49708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:11:46.259468   49708 start.go:472] detecting cgroup driver to use...
	I1024 20:11:46.259530   49708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:11:46.278708   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:11:46.292769   49708 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:11:46.292827   49708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:11:46.311808   49708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:11:46.329420   49708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:11:46.452375   49708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:11:46.584041   49708 docker.go:214] disabling docker service ...
	I1024 20:11:46.584114   49708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:11:46.606114   49708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:11:46.623302   49708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:11:46.732771   49708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:11:46.862687   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:11:46.879573   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:11:46.900885   49708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:11:46.900955   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.911441   49708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:11:46.911500   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.921674   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.931937   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.942104   49708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:11:46.952610   49708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:11:46.961808   49708 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:11:46.961884   49708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:11:46.977789   49708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:11:46.990089   49708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:11:47.130248   49708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:11:47.307336   49708 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:11:47.307402   49708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:11:47.316743   49708 start.go:540] Will wait 60s for crictl version
	I1024 20:11:47.316795   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:11:47.321526   49708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:11:47.369079   49708 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:11:47.369169   49708 ssh_runner.go:195] Run: crio --version
	I1024 20:11:47.419428   49708 ssh_runner.go:195] Run: crio --version
	I1024 20:11:47.477016   49708 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
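Note: the sequence just above configures CRI-O by editing /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup) and then restarting the service. A condensed sketch of the same edits issued from Go through a hypothetical runSSH helper; the sed expressions are exactly the ones visible in the log:

    package crioconf

    // configureCRIO applies the in-place edits shown in the log above and restarts CRI-O.
    // runSSH is a hypothetical command runner on the guest.
    func configureCRIO(runSSH func(cmd string) error) error {
    	cmds := []string{
    		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo systemctl daemon-reload`,
    		`sudo systemctl restart crio`,
    	}
    	for _, c := range cmds {
    		if err := runSSH(c); err != nil {
    			return err
    		}
    	}
    	return nil
    }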
	I1024 20:11:45.967071   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Start
	I1024 20:11:45.967249   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring networks are active...
	I1024 20:11:45.967957   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring network default is active
	I1024 20:11:45.968324   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring network mk-old-k8s-version-467375 is active
	I1024 20:11:45.968743   50077 main.go:141] libmachine: (old-k8s-version-467375) Getting domain xml...
	I1024 20:11:45.969525   50077 main.go:141] libmachine: (old-k8s-version-467375) Creating domain...
	I1024 20:11:47.346548   50077 main.go:141] libmachine: (old-k8s-version-467375) Waiting to get IP...
	I1024 20:11:47.347505   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.347894   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.347980   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.347887   50579 retry.go:31] will retry after 232.244798ms: waiting for machine to come up
	I1024 20:11:47.581582   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.582087   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.582118   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.582044   50579 retry.go:31] will retry after 319.930019ms: waiting for machine to come up
	I1024 20:11:47.478565   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:47.481659   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:47.482040   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:47.482066   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:47.482265   49708 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1024 20:11:47.487054   49708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:47.499693   49708 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:11:47.499765   49708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:47.551897   49708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:11:47.551978   49708 ssh_runner.go:195] Run: which lz4
	I1024 20:11:47.557026   49708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 20:11:47.562364   49708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:11:47.562393   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
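Note: the preload step above first lists the images already on the node, then stats /preloaded.tar.lz4, and only copies the ~458MB tarball when that stat fails. A minimal sketch of that check-then-copy decision, assuming hypothetical runSSH and scpToGuest helpers:

    package preload

    // ensurePreload copies the preloaded image tarball to the guest only when it is
    // not already there, mirroring the stat-then-scp flow in the log above.
    func ensurePreload(runSSH func(cmd string) error, scpToGuest func(src, dst string) error, localTarball string) error {
    	// stat exits non-zero when the file is missing ("No such file or directory"),
    	// which is the path taken above before the large copy starts.
    	if err := runSSH(`stat /preloaded.tar.lz4`); err == nil {
    		return nil // tarball already on the guest, nothing to copy
    	}
    	return scpToGuest(localTarball, "/preloaded.tar.lz4")
    }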
	I1024 20:11:43.852350   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.048386   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.117774   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.202966   49198 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:11:44.203042   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:44.215680   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:44.726471   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:45.226100   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:45.726494   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.226510   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.726607   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.758294   49198 api_server.go:72] duration metric: took 2.555329199s to wait for apiserver process to appear ...
	I1024 20:11:46.758319   49198 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:11:46.758339   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:46.758872   49198 api_server.go:269] stopped: https://192.168.72.10:8443/healthz: Get "https://192.168.72.10:8443/healthz": dial tcp 192.168.72.10:8443: connect: connection refused
	I1024 20:11:46.758909   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:46.759318   49198 api_server.go:269] stopped: https://192.168.72.10:8443/healthz: Get "https://192.168.72.10:8443/healthz": dial tcp 192.168.72.10:8443: connect: connection refused
	I1024 20:11:47.260047   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:50.910793   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:11:50.910830   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:11:50.910852   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:50.943069   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:11:50.943100   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:11:51.259498   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:51.265278   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:11:51.265316   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:11:51.759494   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:51.767253   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:11:51.767280   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:11:52.259758   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:52.265202   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 200:
	ok
	I1024 20:11:52.277533   49198 api_server.go:141] control plane version: v1.28.3
	I1024 20:11:52.277561   49198 api_server.go:131] duration metric: took 5.51923389s to wait for apiserver health ...
	I1024 20:11:52.277572   49198 cni.go:84] Creating CNI manager for ""
	I1024 20:11:52.277580   49198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:52.279542   49198 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
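Note: the healthz sequence above progresses from connection refused, to 403 for the anonymous user, to 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally to 200 "ok". A small Go sketch of polling an HTTPS healthz endpoint until it returns 200; TLS verification is skipped here only to keep the sketch self-contained (a real client would trust the cluster CA), and the 500ms interval matches the cadence in the log:

    package healthz

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls https://<host>:8443/healthz until it returns 200 or the
    // deadline passes; connection errors, 403 and 500 are all treated as "not ready".
    func waitForHealthz(host string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch-only shortcut
    		},
    	}
    	url := fmt.Sprintf("https://%s:8443/healthz", host)
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // the body is the literal "ok" seen above
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("healthz on %s did not return 200 within %s", host, timeout)
    }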
	I1024 20:11:47.904065   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.904524   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.904551   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.904467   50579 retry.go:31] will retry after 440.170251ms: waiting for machine to come up
	I1024 20:11:48.346206   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:48.346778   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:48.346802   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:48.346686   50579 retry.go:31] will retry after 472.001777ms: waiting for machine to come up
	I1024 20:11:48.820100   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:48.820625   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:48.820660   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:48.820533   50579 retry.go:31] will retry after 487.055032ms: waiting for machine to come up
	I1024 20:11:49.309351   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:49.309816   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:49.309836   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:49.309751   50579 retry.go:31] will retry after 945.474211ms: waiting for machine to come up
	I1024 20:11:50.257106   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:50.257611   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:50.257641   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:50.257563   50579 retry.go:31] will retry after 915.312538ms: waiting for machine to come up
	I1024 20:11:51.174245   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:51.174832   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:51.174889   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:51.174792   50579 retry.go:31] will retry after 1.09533855s: waiting for machine to come up
	I1024 20:11:52.271604   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:52.272082   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:52.272111   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:52.272041   50579 retry.go:31] will retry after 1.411155014s: waiting for machine to come up
	I1024 20:11:49.517078   49708 crio.go:444] Took 1.960093 seconds to copy over tarball
	I1024 20:11:49.517170   49708 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:11:53.113830   49708 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.596633239s)
	I1024 20:11:53.113858   49708 crio.go:451] Took 3.596755 seconds to extract the tarball
	I1024 20:11:53.113865   49708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:11:53.157476   49708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:53.204980   49708 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:11:53.205004   49708 cache_images.go:84] Images are preloaded, skipping loading
	I1024 20:11:53.205090   49708 ssh_runner.go:195] Run: crio config
	I1024 20:11:53.264588   49708 cni.go:84] Creating CNI manager for ""
	I1024 20:11:53.264613   49708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:53.264634   49708 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:11:53.264662   49708 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.148 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-643126 NodeName:default-k8s-diff-port-643126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:11:53.264869   49708 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.148
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-643126"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:11:53.264975   49708 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-643126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-643126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1024 20:11:53.265054   49708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:11:53.275886   49708 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:11:53.275982   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:11:53.286132   49708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1024 20:11:53.303735   49708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:11:53.319522   49708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
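	The rendered kubeadm configuration shown above lands in /var/tmp/minikube/kubeadm.yaml.new before it is applied. A minimal sketch of how that file could be checked in place, assuming a shell inside the guest (e.g. minikube ssh -p default-k8s-diff-port-643126) and that the bundled kubeadm is new enough (v1.26+) to offer the validate subcommand:
	    # Validate the generated kubeadm config without applying it.
	    sudo /var/lib/minikube/binaries/v1.28.3/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new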
	I1024 20:11:53.338388   49708 ssh_runner.go:195] Run: grep 192.168.61.148	control-plane.minikube.internal$ /etc/hosts
	I1024 20:11:53.343108   49708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:53.355662   49708 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126 for IP: 192.168.61.148
	I1024 20:11:53.355709   49708 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:53.355873   49708 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:11:53.355910   49708 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:11:53.356023   49708 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.key
	I1024 20:11:53.356086   49708 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.key.8ba5a111
	I1024 20:11:53.356122   49708 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.key
	I1024 20:11:53.356237   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:11:53.356265   49708 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:11:53.356275   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:11:53.356299   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:11:53.356320   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:11:53.356341   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:11:53.356377   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:53.357029   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:11:53.379968   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:11:53.401871   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:11:53.423699   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:11:53.445338   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:11:53.469994   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:11:53.495061   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:11:52.281055   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:11:52.299421   49198 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:11:52.322020   49198 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:11:52.334273   49198 system_pods.go:59] 8 kube-system pods found
	I1024 20:11:52.334318   49198 system_pods.go:61] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:11:52.334332   49198 system_pods.go:61] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:11:52.334356   49198 system_pods.go:61] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:11:52.334372   49198 system_pods.go:61] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:11:52.334389   49198 system_pods.go:61] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:11:52.334401   49198 system_pods.go:61] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:11:52.334413   49198 system_pods.go:61] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:11:52.334425   49198 system_pods.go:61] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:11:52.334438   49198 system_pods.go:74] duration metric: took 12.395036ms to wait for pod list to return data ...
	I1024 20:11:52.334450   49198 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:11:52.338486   49198 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:11:52.338518   49198 node_conditions.go:123] node cpu capacity is 2
	I1024 20:11:52.338530   49198 node_conditions.go:105] duration metric: took 4.073559ms to run NodePressure ...
	I1024 20:11:52.338555   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:55.075569   49198 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.736987276s)
	I1024 20:11:55.075611   49198 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:11:55.080481   49198 kubeadm.go:787] kubelet initialised
	I1024 20:11:55.080508   49198 kubeadm.go:788] duration metric: took 4.884507ms waiting for restarted kubelet to initialise ...
	I1024 20:11:55.080519   49198 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:11:55.087371   49198 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.092583   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.092616   49198 pod_ready.go:81] duration metric: took 5.215308ms waiting for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.092627   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.092636   49198 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.098518   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "etcd-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.098540   49198 pod_ready.go:81] duration metric: took 5.887969ms waiting for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.098551   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "etcd-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.098560   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.103375   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.103400   49198 pod_ready.go:81] duration metric: took 4.83092ms waiting for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.103411   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.103419   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.108416   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.108443   49198 pod_ready.go:81] duration metric: took 5.016219ms waiting for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.108454   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.108462   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.482846   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-proxy-thkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.482873   49198 pod_ready.go:81] duration metric: took 374.401616ms waiting for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.482885   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-proxy-thkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.482897   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.879895   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.879922   49198 pod_ready.go:81] duration metric: took 397.016576ms waiting for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.879935   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.879947   49198 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:56.280405   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:56.280445   49198 pod_ready.go:81] duration metric: took 400.488591ms waiting for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:56.280464   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:56.280475   49198 pod_ready.go:38] duration metric: took 1.19994252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
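	The pod_ready.go wait above amounts to gating on the Ready condition of the system-critical pods in kube-system. A minimal kubectl sketch of the same gate, assuming the context name matches the embed-certs-867165 profile; illustrative only:
	    # Wait for the CoreDNS pods tracked above to report Ready.
	    kubectl --context embed-certs-867165 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s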
	I1024 20:11:56.280498   49198 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:11:56.298423   49198 ops.go:34] apiserver oom_adj: -16
	I1024 20:11:56.298445   49198 kubeadm.go:640] restartCluster took 23.358699894s
	I1024 20:11:56.298455   49198 kubeadm.go:406] StartCluster complete in 23.405500606s
	I1024 20:11:56.298474   49198 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:56.298551   49198 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:11:56.300724   49198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:56.300999   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:11:56.301104   49198 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:11:56.301193   49198 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-867165"
	I1024 20:11:56.301203   49198 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:56.301216   49198 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-867165"
	W1024 20:11:56.301261   49198 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:11:56.301260   49198 addons.go:69] Setting metrics-server=true in profile "embed-certs-867165"
	I1024 20:11:56.301290   49198 addons.go:69] Setting default-storageclass=true in profile "embed-certs-867165"
	I1024 20:11:56.301312   49198 addons.go:231] Setting addon metrics-server=true in "embed-certs-867165"
	I1024 20:11:56.301315   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	W1024 20:11:56.301328   49198 addons.go:240] addon metrics-server should already be in state true
	I1024 20:11:56.301331   49198 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-867165"
	I1024 20:11:56.301418   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	I1024 20:11:56.301743   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301744   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301767   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.301771   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.301826   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301867   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.307030   49198 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-867165" context rescaled to 1 replicas
	I1024 20:11:56.307062   49198 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:11:56.309053   49198 out.go:177] * Verifying Kubernetes components...
	I1024 20:11:56.310743   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:11:56.317523   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I1024 20:11:56.317889   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.318430   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.318450   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.318881   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.319437   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.319486   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.320723   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1024 20:11:56.320906   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39685
	I1024 20:11:56.321377   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.321491   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.322079   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.322107   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.322370   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.322389   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.322464   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.322770   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.322829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.323410   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.323444   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.326654   49198 addons.go:231] Setting addon default-storageclass=true in "embed-certs-867165"
	W1024 20:11:56.326674   49198 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:11:56.326700   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	I1024 20:11:56.327084   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.327111   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.335811   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I1024 20:11:56.336310   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.336762   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.336774   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.337109   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.337272   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.338868   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.340964   49198 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:11:56.342438   49198 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:11:56.342454   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:11:56.342472   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.341955   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I1024 20:11:56.343402   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.344019   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.344038   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.344502   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.344694   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.345753   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.346097   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I1024 20:11:56.346367   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.346398   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.346660   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.346666   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.346829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.348534   49198 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:11:53.684729   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:53.685093   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:53.685129   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:53.685030   50579 retry.go:31] will retry after 1.793178726s: waiting for machine to come up
	I1024 20:11:55.481150   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:55.481696   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:55.481729   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:55.481639   50579 retry.go:31] will retry after 2.680463816s: waiting for machine to come up
	I1024 20:11:56.347164   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.347192   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.350114   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.350141   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:11:56.350155   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:11:56.350174   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.350270   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.350397   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.350847   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.351478   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.351514   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.354060   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.354451   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.354472   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.354625   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.354819   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.354978   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.355161   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.371309   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1024 20:11:56.371746   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.372300   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.372325   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.372764   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.372981   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.374651   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.374894   49198 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:11:56.374911   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:11:56.374934   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.377962   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.378385   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.378408   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.378585   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.378789   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.378954   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.379083   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.471271   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:11:56.504355   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:11:56.504382   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:11:56.552351   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:11:56.576037   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:11:56.576068   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:11:56.606745   49198 node_ready.go:35] waiting up to 6m0s for node "embed-certs-867165" to be "Ready" ...
	I1024 20:11:56.606772   49198 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:11:56.620862   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:11:56.620897   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:11:56.676519   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:11:57.851757   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.380440836s)
	I1024 20:11:57.851814   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.851816   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.299429923s)
	I1024 20:11:57.851829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.851865   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.851882   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852242   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852262   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852272   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.852282   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852368   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852412   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852441   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.852467   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852412   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.852537   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852560   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852814   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.852859   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852877   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860105   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183533543s)
	I1024 20:11:57.860176   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.860195   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.860492   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.860494   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.860515   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860526   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.860537   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.860828   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.860857   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.860876   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860886   49198 addons.go:467] Verifying addon metrics-server=true in "embed-certs-867165"
	I1024 20:11:57.860990   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.861011   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.861220   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.861227   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.861236   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.864370   49198 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
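	The addon manifests applied above are the same ones the minikube CLI manages per profile. An illustrative host-side equivalent, using the profile name from this log:
	    # Inspect and toggle the addons for this profile from the host.
	    minikube -p embed-certs-867165 addons list
	    minikube -p embed-certs-867165 addons enable metrics-server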
	I1024 20:11:53.521030   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:11:53.844700   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:11:53.868393   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:11:53.892495   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:11:53.916345   49708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:11:53.935576   49708 ssh_runner.go:195] Run: openssl version
	I1024 20:11:53.943066   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:11:53.957325   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.962959   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.963026   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.969104   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:11:53.980253   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:11:53.990977   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:53.995906   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:53.995992   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:54.001847   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:11:54.012635   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:11:54.023490   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.028300   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.028355   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.033965   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
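	The <hash>.0 symlink names created above follow OpenSSL's subject-hash convention for trust stores. A minimal sketch of how such a name is derived, using the certificate path from this log:
	    # Print the subject hash OpenSSL uses for trust-store links (e.g. 51391683),
	    # then create the matching <hash>.0 symlink, mirroring the commands above.
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem)
	    sudo ln -fs /usr/share/ca-certificates/16298.pem "/etc/ssl/certs/${hash}.0"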
	I1024 20:11:54.044984   49708 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:11:54.049588   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:11:54.055434   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:11:54.061692   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:11:54.068131   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:11:54.074484   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:11:54.080349   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
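	The -checkend 86400 runs above ask whether each certificate will still be valid 24 hours (86400 seconds) from now. A short sketch of the exit-code semantics, using one of the paths from this log:
	    # -checkend N exits 0 if the cert is still valid N seconds from now, non-zero otherwise.
	    if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	      echo "certificate valid for at least another 24h"
	    else
	      echo "certificate expires within 24h"
	    fi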
	I1024 20:11:54.086499   49708 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-643126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-643126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.148 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:11:54.086598   49708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:11:54.086655   49708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:54.127406   49708 cri.go:89] found id: ""
	I1024 20:11:54.127494   49708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:11:54.137720   49708 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:11:54.137743   49708 kubeadm.go:636] restartCluster start
	I1024 20:11:54.137801   49708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:11:54.147925   49708 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.149006   49708 kubeconfig.go:92] found "default-k8s-diff-port-643126" server: "https://192.168.61.148:8444"
	I1024 20:11:54.151513   49708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:11:54.162303   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.162371   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.173715   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.173763   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.173816   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.184641   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.685342   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.685431   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.698640   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:55.185173   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:55.185284   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:55.201355   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:55.684814   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:55.684885   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:55.696664   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:56.185711   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:56.185795   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:56.201419   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:56.684932   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:56.685029   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:56.701458   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.185009   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:57.185111   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:57.201166   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.685654   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:57.685739   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:57.701496   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:58.185022   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:58.185076   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:58.197394   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.865715   49198 addons.go:502] enable addons completed in 1.564611111s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1024 20:11:58.683275   49198 node_ready.go:58] node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:58.163942   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:58.164342   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:58.164369   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:58.164308   50579 retry.go:31] will retry after 2.238050336s: waiting for machine to come up
	I1024 20:12:00.403552   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:00.403947   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:12:00.403975   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:12:00.403907   50579 retry.go:31] will retry after 3.901299207s: waiting for machine to come up
	I1024 20:11:58.685131   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:58.685225   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:58.700458   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:59.184854   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:59.184936   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:59.200498   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:59.685159   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:59.685260   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:59.698793   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.185350   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:00.185418   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:00.200046   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.685255   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:00.685341   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:00.698229   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:01.185036   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:01.185105   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:01.200083   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:01.685617   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:01.685700   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:01.697442   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:02.184897   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:02.184980   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:02.196208   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:02.685769   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:02.685854   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:02.697356   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:03.184898   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:03.184977   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:03.196522   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.684425   49198 node_ready.go:58] node "embed-certs-867165" has status "Ready":"False"
	I1024 20:12:01.683130   49198 node_ready.go:49] node "embed-certs-867165" has status "Ready":"True"
	I1024 20:12:01.683154   49198 node_ready.go:38] duration metric: took 5.076371929s waiting for node "embed-certs-867165" to be "Ready" ...
	I1024 20:12:01.683162   49198 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:01.689566   49198 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:01.695393   49198 pod_ready.go:92] pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:01.695416   49198 pod_ready.go:81] duration metric: took 5.827696ms waiting for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:01.695427   49198 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:03.712775   49198 pod_ready.go:102] pod "etcd-embed-certs-867165" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:04.306338   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:04.306804   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:12:04.306835   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:12:04.306770   50579 retry.go:31] will retry after 5.15211395s: waiting for machine to come up
	I1024 20:12:03.685737   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:03.685827   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:03.697510   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:04.163385   49708 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:12:04.163416   49708 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:12:04.163449   49708 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:12:04.163520   49708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:04.209780   49708 cri.go:89] found id: ""
	I1024 20:12:04.209834   49708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:12:04.226347   49708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:12:04.235134   49708 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:12:04.235185   49708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:04.243361   49708 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:04.243380   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:04.370510   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.461155   49708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090606159s)
	I1024 20:12:05.461192   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.649281   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.742338   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.829426   49708 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:12:05.829494   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:05.841869   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:06.356907   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:06.856157   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:07.356140   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:07.856020   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:08.356129   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:08.382595   49708 api_server.go:72] duration metric: took 2.553177252s to wait for apiserver process to appear ...
	I1024 20:12:08.382622   49708 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:12:08.382641   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:04.213550   49198 pod_ready.go:92] pod "etcd-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.213573   49198 pod_ready.go:81] duration metric: took 2.518138084s waiting for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.213585   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.218813   49198 pod_ready.go:92] pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.218841   49198 pod_ready.go:81] duration metric: took 5.247061ms waiting for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.218855   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.224562   49198 pod_ready.go:92] pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.224585   49198 pod_ready.go:81] duration metric: took 5.720637ms waiting for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.224597   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.484197   49198 pod_ready.go:92] pod "kube-proxy-thkqr" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.484216   49198 pod_ready.go:81] duration metric: took 259.611869ms waiting for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.484224   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.883941   49198 pod_ready.go:92] pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.883968   49198 pod_ready.go:81] duration metric: took 399.73679ms waiting for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.883982   49198 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:07.193414   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:10.878419   49071 start.go:369] acquired machines lock for "no-preload-014826" in 1m0.065559113s
	I1024 20:12:10.878467   49071 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:12:10.878475   49071 fix.go:54] fixHost starting: 
	I1024 20:12:10.878869   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:10.878901   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:10.898307   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I1024 20:12:10.898732   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:10.899250   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:12:10.899268   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:10.899614   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:10.899790   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:10.899933   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:12:10.901569   49071 fix.go:102] recreateIfNeeded on no-preload-014826: state=Stopped err=<nil>
	I1024 20:12:10.901593   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	W1024 20:12:10.901753   49071 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:12:10.904367   49071 out.go:177] * Restarting existing kvm2 VM for "no-preload-014826" ...
	I1024 20:12:09.462373   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.462813   50077 main.go:141] libmachine: (old-k8s-version-467375) Found IP for machine: 192.168.39.71
	I1024 20:12:09.462836   50077 main.go:141] libmachine: (old-k8s-version-467375) Reserving static IP address...
	I1024 20:12:09.462853   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has current primary IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.463385   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "old-k8s-version-467375", mac: "52:54:00:28:42:97", ip: "192.168.39.71"} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.463423   50077 main.go:141] libmachine: (old-k8s-version-467375) Reserved static IP address: 192.168.39.71
	I1024 20:12:09.463442   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | skip adding static IP to network mk-old-k8s-version-467375 - found existing host DHCP lease matching {name: "old-k8s-version-467375", mac: "52:54:00:28:42:97", ip: "192.168.39.71"}
	I1024 20:12:09.463463   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Getting to WaitForSSH function...
	I1024 20:12:09.463484   50077 main.go:141] libmachine: (old-k8s-version-467375) Waiting for SSH to be available...
	I1024 20:12:09.465635   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.465951   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.465979   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.466131   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Using SSH client type: external
	I1024 20:12:09.466167   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa (-rw-------)
	I1024 20:12:09.466210   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:12:09.466227   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | About to run SSH command:
	I1024 20:12:09.466256   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | exit 0
	I1024 20:12:09.565274   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | SSH cmd err, output: <nil>: 
	I1024 20:12:09.565647   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetConfigRaw
	I1024 20:12:09.566251   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:09.569078   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.569551   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.569585   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.569863   50077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:12:09.570097   50077 machine.go:88] provisioning docker machine ...
	I1024 20:12:09.570122   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:09.570355   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.570604   50077 buildroot.go:166] provisioning hostname "old-k8s-version-467375"
	I1024 20:12:09.570634   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.570807   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.573170   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.573560   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.573587   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.573757   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:09.573934   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.574080   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.574209   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:09.574414   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:09.574840   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:09.574858   50077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467375 && echo "old-k8s-version-467375" | sudo tee /etc/hostname
	I1024 20:12:09.718150   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467375
	
	I1024 20:12:09.718201   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.721079   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.721461   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.721495   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.721653   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:09.721865   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.722016   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.722167   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:09.722324   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:09.722712   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:09.722732   50077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467375' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467375/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467375' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:12:09.865069   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:12:09.865098   50077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:12:09.865125   50077 buildroot.go:174] setting up certificates
	I1024 20:12:09.865136   50077 provision.go:83] configureAuth start
	I1024 20:12:09.865151   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.865449   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:09.868055   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.868480   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.868513   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.868693   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.870838   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.871203   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.871227   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.871363   50077 provision.go:138] copyHostCerts
	I1024 20:12:09.871411   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:12:09.871423   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:12:09.871490   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:12:09.871613   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:12:09.871625   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:12:09.871655   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:12:09.871743   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:12:09.871753   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:12:09.871783   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:12:09.871856   50077 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467375 san=[192.168.39.71 192.168.39.71 localhost 127.0.0.1 minikube old-k8s-version-467375]
	I1024 20:12:10.091178   50077 provision.go:172] copyRemoteCerts
	I1024 20:12:10.091229   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:12:10.091253   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.094245   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.094550   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.094590   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.094759   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.094955   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.095123   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.095271   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.192715   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:12:10.216110   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:12:10.239468   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 20:12:10.263113   50077 provision.go:86] duration metric: configureAuth took 397.957727ms
	I1024 20:12:10.263138   50077 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:12:10.263366   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:12:10.263480   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.265995   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.266293   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.266334   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.266467   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.266696   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.266863   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.267027   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.267168   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:10.267653   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:10.267677   50077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:12:10.596009   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:12:10.596032   50077 machine.go:91] provisioned docker machine in 1.025920355s
	I1024 20:12:10.596041   50077 start.go:300] post-start starting for "old-k8s-version-467375" (driver="kvm2")
	I1024 20:12:10.596050   50077 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:12:10.596075   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.596415   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:12:10.596450   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.598886   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.599234   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.599259   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.599446   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.599647   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.599812   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.599955   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.697045   50077 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:12:10.701363   50077 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:12:10.701387   50077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:12:10.701458   50077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:12:10.701546   50077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:12:10.701653   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:12:10.712072   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:10.737471   50077 start.go:303] post-start completed in 141.415073ms
	I1024 20:12:10.737508   50077 fix.go:56] fixHost completed within 24.794946143s
	I1024 20:12:10.737533   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.740438   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.740792   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.740820   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.741024   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.741247   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.741428   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.741691   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.741861   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:10.742407   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:10.742431   50077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1024 20:12:10.878250   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178330.824734287
	
	I1024 20:12:10.878273   50077 fix.go:206] guest clock: 1698178330.824734287
	I1024 20:12:10.878283   50077 fix.go:219] Guest: 2023-10-24 20:12:10.824734287 +0000 UTC Remote: 2023-10-24 20:12:10.737513672 +0000 UTC m=+157.935911605 (delta=87.220615ms)
	I1024 20:12:10.878307   50077 fix.go:190] guest clock delta is within tolerance: 87.220615ms
	I1024 20:12:10.878314   50077 start.go:83] releasing machines lock for "old-k8s-version-467375", held for 24.935800385s
	I1024 20:12:10.878347   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.878614   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:10.881335   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.881746   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.881784   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.881933   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882442   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882654   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882741   50077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:12:10.882801   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.882860   50077 ssh_runner.go:195] Run: cat /version.json
	I1024 20:12:10.882886   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.885640   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.885856   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886047   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.886070   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886209   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.886276   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.886315   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886383   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.886439   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.886535   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.886579   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.886683   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.886699   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.886816   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:11.006700   50077 ssh_runner.go:195] Run: systemctl --version
	I1024 20:12:11.012734   50077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:12:11.162399   50077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:12:11.169673   50077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:12:11.169751   50077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:12:11.184770   50077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:12:11.184794   50077 start.go:472] detecting cgroup driver to use...
	I1024 20:12:11.184858   50077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:12:11.202317   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:12:11.218122   50077 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:12:11.218187   50077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:12:11.233177   50077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:12:11.247591   50077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:12:11.387195   50077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:12:11.520544   50077 docker.go:214] disabling docker service ...
	I1024 20:12:11.520615   50077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:12:11.539166   50077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:12:11.552957   50077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:12:11.710494   50077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:12:11.837532   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:12:11.854418   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:12:11.874953   50077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1024 20:12:11.875040   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.887115   50077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:12:11.887206   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.898994   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.908652   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.918280   50077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:12:11.930870   50077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:12:11.939522   50077 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:12:11.939580   50077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:12:11.955005   50077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:12:11.965173   50077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:12:12.098480   50077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:12:12.296897   50077 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:12:12.296993   50077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:12:12.302906   50077 start.go:540] Will wait 60s for crictl version
	I1024 20:12:12.302956   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:12.307142   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:12:12.353253   50077 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:12:12.353369   50077 ssh_runner.go:195] Run: crio --version
	I1024 20:12:12.417241   50077 ssh_runner.go:195] Run: crio --version
	I1024 20:12:12.486375   50077 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1024 20:12:12.487819   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:12.491366   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:12.491830   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:12.491862   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:12.492054   50077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 20:12:12.497705   50077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:12.514116   50077 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 20:12:12.514208   50077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:12.569171   50077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 20:12:12.569247   50077 ssh_runner.go:195] Run: which lz4
	I1024 20:12:12.574729   50077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1024 20:12:12.579319   50077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:12:12.579364   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1024 20:12:10.905856   49071 main.go:141] libmachine: (no-preload-014826) Calling .Start
	I1024 20:12:10.906027   49071 main.go:141] libmachine: (no-preload-014826) Ensuring networks are active...
	I1024 20:12:10.906761   49071 main.go:141] libmachine: (no-preload-014826) Ensuring network default is active
	I1024 20:12:10.907112   49071 main.go:141] libmachine: (no-preload-014826) Ensuring network mk-no-preload-014826 is active
	I1024 20:12:10.907486   49071 main.go:141] libmachine: (no-preload-014826) Getting domain xml...
	I1024 20:12:10.908225   49071 main.go:141] libmachine: (no-preload-014826) Creating domain...
	I1024 20:12:12.324832   49071 main.go:141] libmachine: (no-preload-014826) Waiting to get IP...
	I1024 20:12:12.326055   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.326595   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.326695   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.326594   50821 retry.go:31] will retry after 197.462386ms: waiting for machine to come up
	I1024 20:12:12.526293   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.526743   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.526774   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.526720   50821 retry.go:31] will retry after 271.486585ms: waiting for machine to come up
	I1024 20:12:12.800360   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.801756   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.801940   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.801863   50821 retry.go:31] will retry after 486.882671ms: waiting for machine to come up
	I1024 20:12:12.479397   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:12.479431   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:12.479445   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:12.490441   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:12.490470   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:12.990764   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:13.006526   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:13.006556   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:13.490974   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:13.499731   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:13.499764   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:09.195216   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:11.694410   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:13.698362   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:13.991467   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:14.011775   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 200:
	ok
	I1024 20:12:14.048756   49708 api_server.go:141] control plane version: v1.28.3
	I1024 20:12:14.048791   49708 api_server.go:131] duration metric: took 5.666161032s to wait for apiserver health ...
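	(The repeated 500 responses above are the apiserver's per-check healthz output: each [-] line is a post-start hook that has not finished yet, and the endpoint flips to 200 once every hook reports ok. A minimal sketch of probing the same endpoint by hand, using the host and port taken from the log above; the flags and kubeconfig path are illustrative, not part of the test run:)
	  # Query the apiserver health endpoint with per-check detail (sketch; adjust host/port/certs as needed)
	  curl -k "https://192.168.61.148:8444/healthz?verbose"
	  # Or go through the node's kubeconfig instead of hitting the endpoint directly
	  kubectl --kubeconfig /var/lib/minikube/kubeconfig get --raw '/healthz?verbose'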
	I1024 20:12:14.048802   49708 cni.go:84] Creating CNI manager for ""
	I1024 20:12:14.048812   49708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:14.050652   49708 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:12:14.052331   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:12:14.086953   49708 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
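	(The 457-byte file copied above is minikube's bridge CNI config; its exact contents are not shown in the log. For orientation only, a typical bridge conflist looks roughly like the sketch below; the field values are illustrative and are not the file minikube actually wrote:)
	  # Sketch of a typical bridge CNI conflist (illustrative values only)
	  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isGateway": true,
	        "ipMasq": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF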
	I1024 20:12:14.142753   49708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:12:14.162085   49708 system_pods.go:59] 8 kube-system pods found
	I1024 20:12:14.162211   49708 system_pods.go:61] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:12:14.162246   49708 system_pods.go:61] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:12:14.162280   49708 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:12:14.162307   49708 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:12:14.162330   49708 system_pods.go:61] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:12:14.162352   49708 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:12:14.162375   49708 system_pods.go:61] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:12:14.162411   49708 system_pods.go:61] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:12:14.162434   49708 system_pods.go:74] duration metric: took 19.657104ms to wait for pod list to return data ...
	I1024 20:12:14.162456   49708 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:12:14.173042   49708 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:12:14.173078   49708 node_conditions.go:123] node cpu capacity is 2
	I1024 20:12:14.173093   49708 node_conditions.go:105] duration metric: took 10.618815ms to run NodePressure ...
	I1024 20:12:14.173117   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:14.763495   49708 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:12:14.768626   49708 kubeadm.go:787] kubelet initialised
	I1024 20:12:14.768653   49708 kubeadm.go:788] duration metric: took 5.128553ms waiting for restarted kubelet to initialise ...
	I1024 20:12:14.768663   49708 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:14.788128   49708 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.800546   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.800582   49708 pod_ready.go:81] duration metric: took 12.417978ms waiting for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.800597   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.800610   49708 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.808416   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.808448   49708 pod_ready.go:81] duration metric: took 7.821099ms waiting for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.808463   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.808472   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.814286   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.814317   49708 pod_ready.go:81] duration metric: took 5.833548ms waiting for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.814331   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.814341   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.825548   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.825582   49708 pod_ready.go:81] duration metric: took 11.230382ms waiting for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.825596   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.825606   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.168279   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-proxy-x4zbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.168323   49708 pod_ready.go:81] duration metric: took 342.707312ms waiting for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.168338   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-proxy-x4zbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.168351   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.567697   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.567735   49708 pod_ready.go:81] duration metric: took 399.371702ms waiting for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.567750   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.567838   49708 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.967716   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.967750   49708 pod_ready.go:81] duration metric: took 399.892272ms waiting for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.967764   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.967773   49708 pod_ready.go:38] duration metric: took 1.199098599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:15.967793   49708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:12:15.986399   49708 ops.go:34] apiserver oom_adj: -16
	I1024 20:12:15.986422   49708 kubeadm.go:640] restartCluster took 21.848673162s
	I1024 20:12:15.986430   49708 kubeadm.go:406] StartCluster complete in 21.899940105s
	I1024 20:12:15.986444   49708 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:15.986545   49708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:12:15.989108   49708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:15.989647   49708 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:12:15.989617   49708 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:12:15.989715   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:12:15.989719   49708 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989736   49708 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-643126"
	W1024 20:12:15.989752   49708 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:12:15.989752   49708 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989775   49708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-643126"
	I1024 20:12:15.989786   49708 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989802   49708 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-643126"
	I1024 20:12:15.989804   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	W1024 20:12:15.989809   49708 addons.go:240] addon metrics-server should already be in state true
	I1024 20:12:15.989849   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	I1024 20:12:15.990183   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990192   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990246   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.990294   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.990209   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990327   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.995810   49708 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-643126" context rescaled to 1 replicas
	I1024 20:12:15.995838   49708 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.148 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:12:15.998001   49708 out.go:177] * Verifying Kubernetes components...
	I1024 20:12:16.001589   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:12:16.010690   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I1024 20:12:16.011310   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.011861   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.011890   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.012279   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.012906   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.012960   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.013706   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I1024 20:12:16.014057   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.014533   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.014560   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.014905   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.015330   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I1024 20:12:16.015444   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.015486   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.015703   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.016168   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.016188   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.016591   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.016763   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.020428   49708 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-643126"
	W1024 20:12:16.020448   49708 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:12:16.020474   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	I1024 20:12:16.020840   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.020873   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.031538   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I1024 20:12:16.033822   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.034350   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.034367   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.034746   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.034802   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34969
	I1024 20:12:16.034978   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.035073   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.035525   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.035549   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.035943   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.036217   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.036694   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.038891   49708 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:12:16.037871   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.040815   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:12:16.040832   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:12:16.040851   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.042238   49708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:14.393634   50077 crio.go:444] Took 1.818945 seconds to copy over tarball
	I1024 20:12:14.393720   50077 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:12:17.795931   50077 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.402175992s)
	I1024 20:12:17.795962   50077 crio.go:451] Took 3.402303 seconds to extract the tarball
	I1024 20:12:17.795974   50077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:12:17.841100   50077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:16.043742   49708 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:12:16.043758   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:12:16.043775   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.046924   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.047003   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.047035   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.047068   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.047224   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.049392   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.049433   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.049469   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.049487   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I1024 20:12:16.049492   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.049976   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.050488   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.050502   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.050534   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.050712   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.050810   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.050844   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.050974   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.051292   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.051327   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.051585   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.067412   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I1024 20:12:16.067810   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.068428   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.068445   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.068991   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.069222   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.070923   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.071196   49708 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:12:16.071219   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:12:16.071238   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.074735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.075400   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.075431   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.075630   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.075796   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.075935   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.076097   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.201177   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:12:16.201198   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:12:16.224757   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:12:16.247200   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:12:16.247225   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:12:16.259476   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:12:16.324327   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:12:16.324354   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:12:16.371331   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:12:16.384042   49708 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-643126" to be "Ready" ...
	I1024 20:12:16.384367   49708 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:12:17.654459   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.429657283s)
	I1024 20:12:17.654516   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.654529   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.654951   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:17.654978   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.654990   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:17.655004   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.655016   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.655330   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.655353   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:17.672310   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.672337   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.672693   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:17.672738   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.672761   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.138719   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.879209719s)
	I1024 20:12:18.138769   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.138783   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.139079   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.139091   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.139103   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.139117   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.139132   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.139322   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.139338   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.139338   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.203722   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.832303736s)
	I1024 20:12:18.203776   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.203793   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.204088   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.204106   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.204118   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.204128   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.204348   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.204378   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.204393   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.204406   49708 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-643126"
	I1024 20:12:13.290974   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:13.291494   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:13.291524   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:13.291402   50821 retry.go:31] will retry after 588.738796ms: waiting for machine to come up
	I1024 20:12:13.882058   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:13.882661   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:13.882685   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:13.882577   50821 retry.go:31] will retry after 626.457323ms: waiting for machine to come up
	I1024 20:12:14.510560   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:14.511120   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:14.511159   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:14.511059   50821 retry.go:31] will retry after 848.521213ms: waiting for machine to come up
	I1024 20:12:15.360917   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:15.361423   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:15.361452   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:15.361397   50821 retry.go:31] will retry after 790.780783ms: waiting for machine to come up
	I1024 20:12:16.153815   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:16.154332   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:16.154364   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:16.154274   50821 retry.go:31] will retry after 1.066691012s: waiting for machine to come up
	I1024 20:12:17.222675   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:17.223280   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:17.223309   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:17.223248   50821 retry.go:31] will retry after 1.657285361s: waiting for machine to come up
	I1024 20:12:18.299768   49708 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1024 20:12:16.196266   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:18.197531   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:18.397703   49708 node_ready.go:58] node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:17.907894   50077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 20:12:18.029064   50077 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 20:12:18.029174   50077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.029196   50077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.029209   50077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.029219   50077 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.029403   50077 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1024 20:12:18.029418   50077 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.029178   50077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.029178   50077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.030719   50077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.030726   50077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.030730   50077 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1024 20:12:18.030748   50077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.030775   50077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.030801   50077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.030972   50077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.031077   50077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.180435   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.182586   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.185966   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1024 20:12:18.190926   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.196636   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.198176   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.205102   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.285789   50077 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1024 20:12:18.285837   50077 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.285889   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.356595   50077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1024 20:12:18.356639   50077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.356678   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.370773   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.387248   50077 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1024 20:12:18.387295   50077 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.387343   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.387461   50077 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1024 20:12:18.387488   50077 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1024 20:12:18.387530   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400566   50077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1024 20:12:18.400608   50077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.400647   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400660   50077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1024 20:12:18.400705   50077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.400742   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400754   50077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1024 20:12:18.400785   50077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.400812   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400845   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.400814   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.545451   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.545541   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1024 20:12:18.545587   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.545674   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.545724   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.545777   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1024 20:12:18.545734   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1024 20:12:18.683462   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1024 20:12:18.683513   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1024 20:12:18.683578   50077 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1024 20:12:18.683656   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1024 20:12:18.683686   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1024 20:12:18.683732   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1024 20:12:18.688916   50077 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1024 20:12:18.688954   50077 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1024 20:12:18.689040   50077 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1024 20:12:20.355824   50077 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.666754363s)
	I1024 20:12:20.355859   50077 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1024 20:12:20.355920   50077 cache_images.go:92] LoadImages completed in 2.326833316s
	W1024 20:12:20.356004   50077 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I1024 20:12:20.356080   50077 ssh_runner.go:195] Run: crio config
	I1024 20:12:20.428753   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:12:20.428775   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:20.428793   50077 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:12:20.428835   50077 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467375 NodeName:old-k8s-version-467375 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1024 20:12:20.429015   50077 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467375"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-467375
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.71:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:12:20.429115   50077 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467375 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:12:20.429179   50077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1024 20:12:20.440158   50077 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:12:20.440239   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:12:20.450883   50077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1024 20:12:20.470913   50077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:12:20.490653   50077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1024 20:12:20.510287   50077 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I1024 20:12:20.514815   50077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:20.526910   50077 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375 for IP: 192.168.39.71
	I1024 20:12:20.526943   50077 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:20.527172   50077 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:12:20.527227   50077 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:12:20.527313   50077 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.key
	I1024 20:12:20.527401   50077 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.key.f4667c0f
	I1024 20:12:20.527458   50077 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.key
	I1024 20:12:20.527617   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:12:20.527658   50077 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:12:20.527672   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:12:20.527712   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:12:20.527768   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:12:20.527803   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:12:20.527867   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:20.528563   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:12:20.561437   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:12:20.593396   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:12:20.626812   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 20:12:20.659073   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:12:20.690934   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:12:20.723550   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:12:20.754091   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:12:20.785078   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:12:20.813190   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:12:20.845338   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:12:20.876594   50077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:12:20.899560   50077 ssh_runner.go:195] Run: openssl version
	I1024 20:12:20.907482   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:12:20.922776   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.929623   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.929693   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.935454   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:12:20.947494   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:12:20.958906   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.964115   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.964177   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.970084   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:12:20.982477   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:12:20.995317   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.000479   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.000568   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.006797   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
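The pair of commands just above (openssl x509 -hash -noout to compute the subject hash, then ln -fs into /etc/ssl/certs/<hash>.0) is how each CA file gets registered with the system trust store. A minimal Go sketch of that step, assuming local execution and simplified error handling (the log performs it over SSH with sudo):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA links a PEM CA file under /etc/ssl/certs/<subject-hash>.0,
	// mirroring the "openssl x509 -hash" + "ln -fs" steps in the log above.
	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // "-f" semantics: replace an existing link
		return os.Symlink(pem, link)
	}

	func main() {
		fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
	}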
	I1024 20:12:21.020161   50077 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:12:21.025037   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:12:21.033376   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:12:21.041858   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:12:21.050119   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:12:21.058140   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:12:21.066151   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
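Each openssl x509 -noout -checkend 86400 probe above exits 0 only if the certificate will still be valid 24 hours from now. A minimal sketch of the same check driven from Go; running it locally rather than over SSH is an assumption for illustration:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certValidFor24h reports whether the certificate at path is still valid
	// 86400 seconds from now, using the same openssl probe seen in the log.
	func certValidFor24h(path string) (bool, error) {
		err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
		if err == nil {
			return true, nil // exit 0: will not expire within 24h
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return false, nil // exit 1: expires within 24h
		}
		return false, err
	}

	func main() {
		ok, err := certValidFor24h("/var/lib/minikube/certs/apiserver-etcd-client.crt")
		fmt.Println(ok, err)
	}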
	I1024 20:12:21.074299   50077 kubeadm.go:404] StartCluster: {Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:12:21.074409   50077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:12:21.074454   50077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:21.125486   50077 cri.go:89] found id: ""
	I1024 20:12:21.125559   50077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:12:21.139034   50077 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:12:21.139058   50077 kubeadm.go:636] restartCluster start
	I1024 20:12:21.139113   50077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:12:21.151994   50077 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.153569   50077 kubeconfig.go:92] found "old-k8s-version-467375" server: "https://192.168.39.71:8443"
	I1024 20:12:21.157114   50077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:12:21.169908   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.169998   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.186116   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.186138   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.186187   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.201283   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.702002   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.702084   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.717499   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:22.201839   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:22.201946   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:22.217814   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:22.702454   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:22.702525   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:22.720944   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
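The repeated "Checking apiserver status" entries poll sudo pgrep for a kube-apiserver process roughly every 500ms until the restart logic gives up. A minimal sketch of that wait loop; the local exec and the 10-second deadline are assumptions for illustration:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer retries the pgrep probe from the log every 500ms
	// until it succeeds or the context deadline expires.
	func waitForAPIServer(ctx context.Context) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver did not appear: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		fmt.Println(waitForAPIServer(ctx))
	}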
	I1024 20:12:18.882382   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:18.882833   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:18.882869   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:18.882798   50821 retry.go:31] will retry after 1.854607935s: waiting for machine to come up
	I1024 20:12:20.738594   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:20.739327   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:20.739375   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:20.739255   50821 retry.go:31] will retry after 2.774006375s: waiting for machine to come up
	I1024 20:12:18.891092   49708 addons.go:502] enable addons completed in 2.901476764s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1024 20:12:20.898330   49708 node_ready.go:58] node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:22.897985   49708 node_ready.go:49] node "default-k8s-diff-port-643126" has status "Ready":"True"
	I1024 20:12:22.898016   49708 node_ready.go:38] duration metric: took 6.51394456s waiting for node "default-k8s-diff-port-643126" to be "Ready" ...
	I1024 20:12:22.898029   49708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:22.907326   49708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:22.915330   49708 pod_ready.go:92] pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:22.915354   49708 pod_ready.go:81] duration metric: took 7.999933ms waiting for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:22.915366   49708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:20.698011   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:23.195726   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:23.201529   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:23.201620   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:23.215098   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:23.701482   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:23.701572   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:23.715481   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:24.201550   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:24.201610   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:24.218008   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:24.701489   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:24.701591   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:24.716960   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:25.201492   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:25.201558   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:25.215972   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:25.701398   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:25.701506   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:25.714016   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:26.201948   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:26.202018   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:26.215403   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:26.701876   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:26.701948   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:26.714598   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:27.202095   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:27.202161   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:27.215728   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:27.702476   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:27.702589   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:27.715925   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:23.514310   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:23.514813   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:23.514850   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:23.514763   50821 retry.go:31] will retry after 3.277478612s: waiting for machine to come up
	I1024 20:12:26.793845   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:26.794291   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:26.794312   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:26.794249   50821 retry.go:31] will retry after 4.518205069s: waiting for machine to come up
	I1024 20:12:24.934951   49708 pod_ready.go:92] pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:24.934977   49708 pod_ready.go:81] duration metric: took 2.019602232s waiting for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.934990   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.940403   49708 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:24.940424   49708 pod_ready.go:81] duration metric: took 5.425415ms waiting for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.940437   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.805106   49708 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:25.805127   49708 pod_ready.go:81] duration metric: took 864.682784ms waiting for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.805137   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.096987   49708 pod_ready.go:92] pod "kube-proxy-x4zbh" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:26.097025   49708 pod_ready.go:81] duration metric: took 291.86715ms waiting for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.097040   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.497404   49708 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:26.497425   49708 pod_ready.go:81] duration metric: took 400.376909ms waiting for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.497444   49708 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.694439   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:28.192955   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:28.201919   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:28.201990   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:28.215407   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:28.701578   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:28.701658   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:28.714135   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:29.202433   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:29.202553   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:29.214936   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:29.702439   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:29.702499   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:29.714852   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:30.202428   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:30.202500   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:30.214283   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:30.702441   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:30.702500   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:30.715562   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:31.170652   50077 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:12:31.170682   50077 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:12:31.170693   50077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:12:31.170772   50077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:31.231971   50077 cri.go:89] found id: ""
	I1024 20:12:31.232068   50077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:12:31.249451   50077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:12:31.261057   50077 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:12:31.261124   50077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:31.270878   50077 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:31.270901   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:31.407803   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.357283   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.567466   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.659297   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.745553   50077 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:12:32.745629   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:32.761052   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:31.314269   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.314887   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has current primary IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.314912   49071 main.go:141] libmachine: (no-preload-014826) Found IP for machine: 192.168.50.162
	I1024 20:12:31.314926   49071 main.go:141] libmachine: (no-preload-014826) Reserving static IP address...
	I1024 20:12:31.315396   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "no-preload-014826", mac: "52:54:00:33:64:68", ip: "192.168.50.162"} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.315434   49071 main.go:141] libmachine: (no-preload-014826) DBG | skip adding static IP to network mk-no-preload-014826 - found existing host DHCP lease matching {name: "no-preload-014826", mac: "52:54:00:33:64:68", ip: "192.168.50.162"}
	I1024 20:12:31.315448   49071 main.go:141] libmachine: (no-preload-014826) Reserved static IP address: 192.168.50.162
	I1024 20:12:31.315465   49071 main.go:141] libmachine: (no-preload-014826) Waiting for SSH to be available...
	I1024 20:12:31.315483   49071 main.go:141] libmachine: (no-preload-014826) DBG | Getting to WaitForSSH function...
	I1024 20:12:31.318209   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.318611   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.318653   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.318819   49071 main.go:141] libmachine: (no-preload-014826) DBG | Using SSH client type: external
	I1024 20:12:31.318871   49071 main.go:141] libmachine: (no-preload-014826) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa (-rw-------)
	I1024 20:12:31.318916   49071 main.go:141] libmachine: (no-preload-014826) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:12:31.318941   49071 main.go:141] libmachine: (no-preload-014826) DBG | About to run SSH command:
	I1024 20:12:31.318957   49071 main.go:141] libmachine: (no-preload-014826) DBG | exit 0
	I1024 20:12:31.414054   49071 main.go:141] libmachine: (no-preload-014826) DBG | SSH cmd err, output: <nil>: 
	I1024 20:12:31.414566   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetConfigRaw
	I1024 20:12:31.415326   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:31.418120   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.418549   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.418582   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.418808   49071 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/config.json ...
	I1024 20:12:31.419009   49071 machine.go:88] provisioning docker machine ...
	I1024 20:12:31.419033   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:31.419222   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.419399   49071 buildroot.go:166] provisioning hostname "no-preload-014826"
	I1024 20:12:31.419423   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.419578   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.421861   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.422241   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.422273   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.422501   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.422676   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.422847   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.423066   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.423250   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.423707   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.423724   49071 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-014826 && echo "no-preload-014826" | sudo tee /etc/hostname
	I1024 20:12:31.557472   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-014826
	
	I1024 20:12:31.557504   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.560529   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.560928   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.560979   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.561201   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.561457   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.561654   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.561817   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.561968   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.562329   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.562357   49071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-014826' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-014826/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-014826' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:12:31.694896   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:12:31.694927   49071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:12:31.694948   49071 buildroot.go:174] setting up certificates
	I1024 20:12:31.694959   49071 provision.go:83] configureAuth start
	I1024 20:12:31.694967   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.695264   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:31.697858   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.698148   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.698176   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.698357   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.700982   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.701332   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.701364   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.701570   49071 provision.go:138] copyHostCerts
	I1024 20:12:31.701625   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:12:31.701642   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:12:31.701733   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:12:31.701845   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:12:31.701857   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:12:31.701883   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:12:31.701947   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:12:31.701956   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:12:31.701978   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:12:31.702043   49071 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.no-preload-014826 san=[192.168.50.162 192.168.50.162 localhost 127.0.0.1 minikube no-preload-014826]
	I1024 20:12:31.798568   49071 provision.go:172] copyRemoteCerts
	I1024 20:12:31.798622   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:12:31.798642   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.801859   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.802237   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.802269   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.802465   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.802672   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.802867   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.803027   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:31.891633   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:12:31.916451   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1024 20:12:31.937924   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:12:31.961360   49071 provision.go:86] duration metric: configureAuth took 266.390893ms
	I1024 20:12:31.961384   49071 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:12:31.961573   49071 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:12:31.961660   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.964354   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.964662   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.964719   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.964798   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.965002   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.965170   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.965329   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.965516   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.965961   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.965983   49071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:12:32.275884   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:12:32.275911   49071 machine.go:91] provisioned docker machine in 856.887593ms
	I1024 20:12:32.275923   49071 start.go:300] post-start starting for "no-preload-014826" (driver="kvm2")
	I1024 20:12:32.275935   49071 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:12:32.275957   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.276268   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:12:32.276298   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.279248   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.279642   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.279678   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.279798   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.279985   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.280182   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.280455   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.371931   49071 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:12:32.375989   49071 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:12:32.376009   49071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:12:32.376077   49071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:12:32.376173   49071 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:12:32.376295   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:12:32.385018   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:32.408697   49071 start.go:303] post-start completed in 132.759815ms
	I1024 20:12:32.408719   49071 fix.go:56] fixHost completed within 21.530244363s
	I1024 20:12:32.408744   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.411800   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.412155   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.412189   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.412363   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.412574   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.412741   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.412916   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.413083   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:32.413469   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:32.413483   49071 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1024 20:12:32.534092   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178352.477877903
	
	I1024 20:12:32.534116   49071 fix.go:206] guest clock: 1698178352.477877903
	I1024 20:12:32.534127   49071 fix.go:219] Guest: 2023-10-24 20:12:32.477877903 +0000 UTC Remote: 2023-10-24 20:12:32.408724059 +0000 UTC m=+364.183674654 (delta=69.153844ms)
	I1024 20:12:32.534153   49071 fix.go:190] guest clock delta is within tolerance: 69.153844ms
	I1024 20:12:32.534159   49071 start.go:83] releasing machines lock for "no-preload-014826", held for 21.655714466s
	I1024 20:12:32.534185   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.534468   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:32.537523   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.537932   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.537961   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.538160   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.538690   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.538919   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.539004   49071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:12:32.539089   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.539138   49071 ssh_runner.go:195] Run: cat /version.json
	I1024 20:12:32.539166   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.542176   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542308   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542652   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.542689   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.542714   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542732   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542981   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.542985   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.543207   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.543214   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.543387   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.543429   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.543573   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.543579   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.631242   49071 ssh_runner.go:195] Run: systemctl --version
	I1024 20:12:32.657695   49071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:12:32.808471   49071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:12:32.815640   49071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:12:32.815712   49071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:12:32.830198   49071 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:12:32.830219   49071 start.go:472] detecting cgroup driver to use...
	I1024 20:12:32.830295   49071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:12:32.845231   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:12:32.863283   49071 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:12:32.863328   49071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:12:32.878295   49071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:12:32.894182   49071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:12:33.024491   49071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:12:33.156548   49071 docker.go:214] disabling docker service ...
	I1024 20:12:33.156621   49071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:12:33.169940   49071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:12:33.182368   49071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:12:28.804366   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:30.806145   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:32.806217   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:30.193022   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:32.195173   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:33.297156   49071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:12:33.434526   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:12:33.453482   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:12:33.471594   49071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:12:33.471665   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.481491   49071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:12:33.481563   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.490505   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.500003   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.509825   49071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:12:33.524014   49071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:12:33.532876   49071 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:12:33.532936   49071 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:12:33.545922   49071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:12:33.554519   49071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:12:33.661858   49071 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:12:33.867286   49071 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:12:33.867361   49071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:12:33.873180   49071 start.go:540] Will wait 60s for crictl version
	I1024 20:12:33.873259   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:33.877238   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:12:33.918479   49071 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:12:33.918624   49071 ssh_runner.go:195] Run: crio --version
	I1024 20:12:33.970986   49071 ssh_runner.go:195] Run: crio --version
	I1024 20:12:34.026667   49071 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 20:12:33.278190   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:33.777448   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:34.277381   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:34.320204   50077 api_server.go:72] duration metric: took 1.574651034s to wait for apiserver process to appear ...
	I1024 20:12:34.320230   50077 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:12:34.320258   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.320744   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I1024 20:12:34.320773   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.321162   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I1024 20:12:34.821724   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.028144   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:34.031311   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:34.031699   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:34.031733   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:34.031888   49071 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1024 20:12:34.036386   49071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:34.052307   49071 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:12:34.052360   49071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:34.099209   49071 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:12:34.099236   49071 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 20:12:34.099291   49071 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.099331   49071 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.099331   49071 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.099414   49071 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.099497   49071 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1024 20:12:34.099512   49071 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.099547   49071 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.099575   49071 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.101069   49071 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.101083   49071 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.101096   49071 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1024 20:12:34.101077   49071 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.101135   49071 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.101147   49071 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.101173   49071 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.101428   49071 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.283586   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.292930   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.294280   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.303296   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1024 20:12:34.314337   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.323356   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.327726   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.373724   49071 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1024 20:12:34.373774   49071 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.373819   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.466499   49071 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1024 20:12:34.466540   49071 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.466582   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.487167   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.489929   49071 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1024 20:12:34.489986   49071 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.490027   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588137   49071 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1024 20:12:34.588178   49071 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.588206   49071 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1024 20:12:34.588231   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588248   49071 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.588286   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588308   49071 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1024 20:12:34.588330   49071 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.588340   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.588358   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588388   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.588410   49071 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1024 20:12:34.588427   49071 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.588447   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588448   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.605099   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.693897   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.694097   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1024 20:12:34.694204   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.707142   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.707184   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.707265   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1024 20:12:34.707388   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:34.707384   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1024 20:12:34.707516   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:34.722106   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1024 20:12:34.722205   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:34.776997   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1024 20:12:34.777019   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.777067   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.777089   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1024 20:12:34.777180   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:34.804122   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1024 20:12:34.804241   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:34.814486   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1024 20:12:34.814532   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1024 20:12:34.814567   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1024 20:12:34.814607   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1024 20:12:34.814634   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:38.115460   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (3.338366217s)
	I1024 20:12:38.115492   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1024 20:12:38.115516   49071 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:38.115548   49071 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3: (3.338341429s)
	I1024 20:12:38.115570   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:38.115586   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1024 20:12:38.115618   49071 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3: (3.311351093s)
	I1024 20:12:38.115644   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1024 20:12:38.115650   49071 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.30100028s)
	I1024 20:12:38.115665   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1024 20:12:34.807460   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:37.307370   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:34.696540   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:37.192160   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:39.822511   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1024 20:12:39.822561   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:40.734083   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:12:40.734125   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:12:40.734161   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:40.777985   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1024 20:12:40.778037   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1024 20:12:40.822134   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.042292   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.042343   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:41.321887   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.363625   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.363682   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:41.821995   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.828080   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.828114   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:42.321381   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:42.331626   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1024 20:12:42.342584   50077 api_server.go:141] control plane version: v1.16.0
	I1024 20:12:42.342614   50077 api_server.go:131] duration metric: took 8.022377051s to wait for apiserver health ...
	I1024 20:12:42.342626   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:12:42.342634   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:42.344676   50077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:12:42.346118   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:12:42.363399   50077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:12:42.389481   50077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:12:42.403326   50077 system_pods.go:59] 7 kube-system pods found
	I1024 20:12:42.403370   50077 system_pods.go:61] "coredns-5644d7b6d9-x567q" [1dc7f1c2-4997-4330-a9bc-b914b1c1db9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:12:42.403381   50077 system_pods.go:61] "etcd-old-k8s-version-467375" [62c8ab28-033f-43fa-96b2-e127d8d46730] Running
	I1024 20:12:42.403389   50077 system_pods.go:61] "kube-apiserver-old-k8s-version-467375" [87c58a79-9f12-4be3-a450-69aa22674541] Running
	I1024 20:12:42.403398   50077 system_pods.go:61] "kube-controller-manager-old-k8s-version-467375" [6bf66f9f-1431-4b3f-b186-528945c54a63] Running
	I1024 20:12:42.403412   50077 system_pods.go:61] "kube-proxy-jdvck" [d35f42b9-9be8-43ee-8434-3d557e31bfde] Running
	I1024 20:12:42.403418   50077 system_pods.go:61] "kube-scheduler-old-k8s-version-467375" [63ae0d31-ace3-4490-a2e8-ed110e3a1072] Running
	I1024 20:12:42.403424   50077 system_pods.go:61] "storage-provisioner" [9105f8d8-3aa1-422d-acf2-9f83e9ede8af] Running
	I1024 20:12:42.403431   50077 system_pods.go:74] duration metric: took 13.927429ms to wait for pod list to return data ...
	I1024 20:12:42.403440   50077 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:12:42.408844   50077 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:12:42.408890   50077 node_conditions.go:123] node cpu capacity is 2
	I1024 20:12:42.408905   50077 node_conditions.go:105] duration metric: took 5.459392ms to run NodePressure ...
	I1024 20:12:42.408926   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:42.701645   50077 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:12:42.707084   50077 retry.go:31] will retry after 366.455415ms: kubelet not initialised
	I1024 20:12:39.807495   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:42.306172   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:39.193434   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:41.195135   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:43.694847   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:43.078083   50077 retry.go:31] will retry after 411.231242ms: kubelet not initialised
	I1024 20:12:43.494711   50077 retry.go:31] will retry after 768.972767ms: kubelet not initialised
	I1024 20:12:44.268690   50077 retry.go:31] will retry after 693.655783ms: kubelet not initialised
	I1024 20:12:45.186580   50077 retry.go:31] will retry after 1.610937297s: kubelet not initialised
	I1024 20:12:46.803897   50077 retry.go:31] will retry after 959.133509ms: kubelet not initialised
	I1024 20:12:47.768260   50077 retry.go:31] will retry after 1.51466069s: kubelet not initialised
	I1024 20:12:45.464752   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.34915976s)
	I1024 20:12:45.464779   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1024 20:12:45.464821   49071 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:45.464899   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:46.936699   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.471766425s)
	I1024 20:12:46.936725   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1024 20:12:46.936750   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:46.936790   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:44.806094   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:46.807137   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:45.696196   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:48.192732   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:49.288179   50077 retry.go:31] will retry after 5.048749504s: kubelet not initialised
	I1024 20:12:49.615688   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.678859869s)
	I1024 20:12:49.615726   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1024 20:12:49.615763   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:49.615840   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:51.387159   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.771279542s)
	I1024 20:12:51.387185   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1024 20:12:51.387209   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:51.387258   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:52.868127   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.480840395s)
	I1024 20:12:52.868158   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1024 20:12:52.868184   49071 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:52.868233   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:49.304156   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:51.305456   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:53.307726   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:50.195756   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:52.196133   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:54.342759   50077 retry.go:31] will retry after 8.402807892s: kubelet not initialised
	I1024 20:12:53.617841   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1024 20:12:53.617883   49071 cache_images.go:123] Successfully loaded all cached images
	I1024 20:12:53.617889   49071 cache_images.go:92] LoadImages completed in 19.518639759s
	I1024 20:12:53.617972   49071 ssh_runner.go:195] Run: crio config
	I1024 20:12:53.677157   49071 cni.go:84] Creating CNI manager for ""
	I1024 20:12:53.677181   49071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:53.677198   49071 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:12:53.677215   49071 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.162 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-014826 NodeName:no-preload-014826 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:12:53.677386   49071 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-014826"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:12:53.677482   49071 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-014826 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-014826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:12:53.677552   49071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:12:53.688840   49071 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:12:53.688904   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:12:53.700095   49071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1024 20:12:53.717176   49071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:12:53.737316   49071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1024 20:12:53.756100   49071 ssh_runner.go:195] Run: grep 192.168.50.162	control-plane.minikube.internal$ /etc/hosts
	I1024 20:12:53.760013   49071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:53.771571   49071 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826 for IP: 192.168.50.162
	I1024 20:12:53.771601   49071 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:53.771752   49071 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:12:53.771811   49071 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:12:53.771896   49071 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.key
	I1024 20:12:53.771975   49071 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.key.1b8245f8
	I1024 20:12:53.772056   49071 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.key
	I1024 20:12:53.772205   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:12:53.772250   49071 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:12:53.772262   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:12:53.772303   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:12:53.772333   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:12:53.772354   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:12:53.772397   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:53.773081   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:12:53.797387   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:12:53.822084   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:12:53.846401   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:12:53.869361   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:12:53.891519   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:12:53.914051   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:12:53.935925   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:12:53.958389   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:12:53.982011   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:12:54.005921   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:12:54.029793   49071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:12:54.047319   49071 ssh_runner.go:195] Run: openssl version
	I1024 20:12:54.053493   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:12:54.064414   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.069060   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.069115   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.075137   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:12:54.088046   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:12:54.099949   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.104810   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.104867   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.110617   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:12:54.122160   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:12:54.133062   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.137858   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.137922   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.144146   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:12:54.155998   49071 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:12:54.160989   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:12:54.167441   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:12:54.173797   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:12:54.180320   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:12:54.186876   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:12:54.193624   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1024 20:12:54.200066   49071 kubeadm.go:404] StartCluster: {Name:no-preload-014826 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-014826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:12:54.200165   49071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:12:54.200202   49071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:54.253207   49071 cri.go:89] found id: ""
	I1024 20:12:54.253267   49071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:12:54.264316   49071 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:12:54.264348   49071 kubeadm.go:636] restartCluster start
	I1024 20:12:54.264404   49071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:12:54.276382   49071 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.277506   49071 kubeconfig.go:92] found "no-preload-014826" server: "https://192.168.50.162:8443"
	I1024 20:12:54.279888   49071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:12:54.290005   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.290052   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.302383   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.302400   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.302447   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.315130   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.815483   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.815574   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.827862   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.315372   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:55.315430   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:55.328409   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.816079   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:55.816141   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:55.829755   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:56.315782   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:56.315869   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:56.329006   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:56.815526   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:56.815621   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:56.828167   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:57.315692   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:57.315781   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:57.328590   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:57.816175   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:57.816250   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:57.832014   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.805830   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:57.810013   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:54.692702   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:57.192210   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:02.750533   50077 retry.go:31] will retry after 7.667287878s: kubelet not initialised
	I1024 20:12:58.315841   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:58.315922   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:58.329743   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:58.815711   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:58.815779   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:58.828215   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:59.315817   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:59.315924   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:59.328911   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:59.815493   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:59.815583   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:59.829684   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.316215   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:00.316294   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:00.330227   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.815830   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:00.815901   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:00.828290   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:01.315228   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:01.315319   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:01.329972   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:01.815426   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:01.815495   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:01.829199   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:02.315754   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:02.315834   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:02.328463   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:02.816091   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:02.816175   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:02.830548   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.304116   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:02.304336   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:59.193761   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:01.692343   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:03.693961   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:03.315186   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:03.315249   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:03.327729   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:03.815302   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:03.815389   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:03.827308   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:04.290952   49071 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:13:04.290993   49071 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:13:04.291005   49071 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:13:04.291078   49071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:13:04.333468   49071 cri.go:89] found id: ""
	I1024 20:13:04.333543   49071 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:13:04.351889   49071 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:13:04.362176   49071 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:13:04.362251   49071 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:13:04.372650   49071 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:13:04.372683   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:04.495803   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.080838   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.290640   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.379839   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.458741   49071 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:13:05.458843   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:05.475039   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:05.997438   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:06.496596   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:06.996587   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:07.496933   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:07.514268   49071 api_server.go:72] duration metric: took 2.055524654s to wait for apiserver process to appear ...
	I1024 20:13:07.514294   49071 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:13:07.514310   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:07.514802   49071 api_server.go:269] stopped: https://192.168.50.162:8443/healthz: Get "https://192.168.50.162:8443/healthz": dial tcp 192.168.50.162:8443: connect: connection refused
	I1024 20:13:07.514840   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:07.515243   49071 api_server.go:269] stopped: https://192.168.50.162:8443/healthz: Get "https://192.168.50.162:8443/healthz": dial tcp 192.168.50.162:8443: connect: connection refused
	I1024 20:13:08.015912   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:04.306097   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:06.805484   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:05.698099   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:08.196336   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:10.424613   50077 retry.go:31] will retry after 17.161095389s: kubelet not initialised
	I1024 20:13:12.512885   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.512923   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:12.512936   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:12.564368   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.564415   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:12.564435   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:12.578188   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.578210   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:13.015415   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:13.022900   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:13:13.022939   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:13:09.305906   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:11.805107   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:10.693989   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:12.696233   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:13.515731   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:13.520510   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:13:13.520565   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:13:14.015693   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:14.021308   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 200:
	ok
	I1024 20:13:14.029247   49071 api_server.go:141] control plane version: v1.28.3
	I1024 20:13:14.029271   49071 api_server.go:131] duration metric: took 6.514969351s to wait for apiserver health ...
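The healthz probe above moves through the three phases the log records: connection refused while the apiserver container is still coming up, 403 because the unauthenticated probe is rejected before the bootstrap RBAC rules that expose /healthz exist, and 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending, before finally returning 200 "ok". The sketch below shows that kind of poll in Go; it is an illustration rather than the code behind api_server.go, the endpoint is taken from the log, and TLS verification is skipped here only because the probe runs unauthenticated:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The sketch skips certificate verification; a real client would
		// trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.162:8443/healthz"

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver is still starting.
			fmt.Println("not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 (anonymous) and 500 (post-start hooks pending) both mean
			// "up but not healthy yet"; 200 with body "ok" means healthy.
			fmt.Printf("status %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /healthz")
}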
	I1024 20:13:14.029281   49071 cni.go:84] Creating CNI manager for ""
	I1024 20:13:14.029289   49071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:13:14.031023   49071 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:13:14.032390   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:13:14.042542   49071 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:13:14.061827   49071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:13:14.077006   49071 system_pods.go:59] 8 kube-system pods found
	I1024 20:13:14.077041   49071 system_pods.go:61] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:13:14.077058   49071 system_pods.go:61] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:13:14.077068   49071 system_pods.go:61] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:13:14.077078   49071 system_pods.go:61] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:13:14.077088   49071 system_pods.go:61] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:13:14.077102   49071 system_pods.go:61] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:13:14.077114   49071 system_pods.go:61] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:13:14.077125   49071 system_pods.go:61] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:13:14.077140   49071 system_pods.go:74] duration metric: took 15.296766ms to wait for pod list to return data ...
	I1024 20:13:14.077150   49071 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:13:14.080871   49071 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:13:14.080896   49071 node_conditions.go:123] node cpu capacity is 2
	I1024 20:13:14.080908   49071 node_conditions.go:105] duration metric: took 3.7473ms to run NodePressure ...
	I1024 20:13:14.080921   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:14.292868   49071 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:13:14.297583   49071 kubeadm.go:787] kubelet initialised
	I1024 20:13:14.297611   49071 kubeadm.go:788] duration metric: took 4.717728ms waiting for restarted kubelet to initialise ...
	I1024 20:13:14.297621   49071 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:14.303742   49071 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.309570   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.309600   49071 pod_ready.go:81] duration metric: took 5.835917ms waiting for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.309608   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.309616   49071 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.316423   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "etcd-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.316453   49071 pod_ready.go:81] duration metric: took 6.829373ms waiting for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.316577   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "etcd-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.316593   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.325238   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-apiserver-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.325271   49071 pod_ready.go:81] duration metric: took 8.669582ms waiting for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.325280   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-apiserver-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.325288   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.466293   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.466319   49071 pod_ready.go:81] duration metric: took 141.023699ms waiting for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.466331   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.466342   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.865820   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-proxy-hvphg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.865855   49071 pod_ready.go:81] duration metric: took 399.504017ms waiting for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.865867   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-proxy-hvphg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.865876   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:15.266786   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-scheduler-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.266820   49071 pod_ready.go:81] duration metric: took 400.936146ms waiting for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:15.266833   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-scheduler-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.266844   49071 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:15.666547   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.666582   49071 pod_ready.go:81] duration metric: took 399.72944ms waiting for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:15.666596   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.666617   49071 pod_ready.go:38] duration metric: took 1.368975115s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:15.666636   49071 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:13:15.686675   49071 ops.go:34] apiserver oom_adj: -16
	I1024 20:13:15.686696   49071 kubeadm.go:640] restartCluster took 21.422341568s
	I1024 20:13:15.686706   49071 kubeadm.go:406] StartCluster complete in 21.486646231s
	I1024 20:13:15.686737   49071 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:13:15.686823   49071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:13:15.688903   49071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:13:15.689192   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:13:15.689321   49071 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:13:15.689405   49071 addons.go:69] Setting storage-provisioner=true in profile "no-preload-014826"
	I1024 20:13:15.689423   49071 addons.go:231] Setting addon storage-provisioner=true in "no-preload-014826"
	I1024 20:13:15.689462   49071 addons.go:69] Setting metrics-server=true in profile "no-preload-014826"
	I1024 20:13:15.689490   49071 addons.go:231] Setting addon metrics-server=true in "no-preload-014826"
	W1024 20:13:15.689512   49071 addons.go:240] addon metrics-server should already be in state true
	I1024 20:13:15.689560   49071 host.go:66] Checking if "no-preload-014826" exists ...
	W1024 20:13:15.689463   49071 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:13:15.689649   49071 host.go:66] Checking if "no-preload-014826" exists ...
	I1024 20:13:15.689445   49071 addons.go:69] Setting default-storageclass=true in profile "no-preload-014826"
	I1024 20:13:15.689716   49071 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-014826"
	I1024 20:13:15.689431   49071 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:13:15.690018   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690051   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.690060   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690086   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.690173   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690225   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.695832   49071 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-014826" context rescaled to 1 replicas
	I1024 20:13:15.695868   49071 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:13:15.698104   49071 out.go:177] * Verifying Kubernetes components...
	I1024 20:13:15.701812   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:13:15.708637   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45543
	I1024 20:13:15.709086   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.709579   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I1024 20:13:15.709941   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.709959   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.710044   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.710478   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.710629   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.710640   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.710943   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.710954   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.711125   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.711367   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I1024 20:13:15.711702   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.711739   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.711852   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.712441   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.712453   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.713081   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.713312   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.717141   49071 addons.go:231] Setting addon default-storageclass=true in "no-preload-014826"
	W1024 20:13:15.717173   49071 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:13:15.717201   49071 host.go:66] Checking if "no-preload-014826" exists ...
	I1024 20:13:15.717655   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.717688   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.729423   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38983
	I1024 20:13:15.730145   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.730747   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.730763   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.730811   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
	I1024 20:13:15.731224   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.731294   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.731487   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.731691   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.731704   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.732239   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.732712   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.733909   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.736374   49071 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:13:15.734682   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.736231   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I1024 20:13:15.738165   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:13:15.738178   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:13:15.738198   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.739819   49071 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:13:15.741717   49071 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:13:15.741733   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:13:15.741752   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.739693   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.742202   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.742374   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.742389   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.742978   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.743000   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.743088   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.743253   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.743408   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.743896   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.744551   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.745028   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.745145   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.745266   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.745462   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.745486   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.745735   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.745870   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.745956   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.746023   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.782650   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I1024 20:13:15.783126   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.783699   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.783721   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.784051   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.784270   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.786114   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.786409   49071 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:13:15.786424   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:13:15.786439   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.788982   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.789347   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.789376   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.789622   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.789838   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.790047   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.790195   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.870753   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:13:15.870771   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:13:15.893772   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:13:15.893799   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:13:15.916179   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:13:15.928570   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:13:15.928596   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:13:15.950610   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:13:15.987129   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:13:15.987945   49071 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:13:15.987993   49071 node_ready.go:35] waiting up to 6m0s for node "no-preload-014826" to be "Ready" ...
	I1024 20:13:17.450534   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.53431699s)
	I1024 20:13:17.450534   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.499892733s)
	I1024 20:13:17.450586   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.450597   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.450609   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.450621   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451126   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451143   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451152   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451160   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451176   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.451180   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451186   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451190   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.451200   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451211   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451380   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451410   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451415   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451429   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451430   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451442   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.464276   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.464297   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.464561   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.464578   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.464585   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.626276   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.639098267s)
	I1024 20:13:17.626344   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.626364   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.626686   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.626711   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.626713   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.626765   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.626779   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.627054   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.627071   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.627082   49071 addons.go:467] Verifying addon metrics-server=true in "no-preload-014826"
	I1024 20:13:17.629289   49071 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1024 20:13:17.630781   49071 addons.go:502] enable addons completed in 1.94145774s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1024 20:13:18.084997   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:13.805526   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:15.807970   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:18.305400   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:15.194668   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:17.694096   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:20.085063   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:22.086260   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:23.087300   49071 node_ready.go:49] node "no-preload-014826" has status "Ready":"True"
	I1024 20:13:23.087338   49071 node_ready.go:38] duration metric: took 7.0993157s waiting for node "no-preload-014826" to be "Ready" ...
	I1024 20:13:23.087350   49071 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:23.093785   49071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:23.101553   49071 pod_ready.go:92] pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:23.101576   49071 pod_ready.go:81] duration metric: took 7.766543ms waiting for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:23.101588   49071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:20.808097   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:23.306150   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:19.696002   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:22.195097   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:27.592041   50077 kubeadm.go:787] kubelet initialised
	I1024 20:13:27.592064   50077 kubeadm.go:788] duration metric: took 44.890387595s waiting for restarted kubelet to initialise ...
	I1024 20:13:27.592071   50077 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:27.596611   50077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.601949   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.601972   50077 pod_ready.go:81] duration metric: took 5.342417ms waiting for pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.601979   50077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.607096   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.607118   50077 pod_ready.go:81] duration metric: took 5.132259ms waiting for pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.607130   50077 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.611971   50077 pod_ready.go:92] pod "etcd-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.611991   50077 pod_ready.go:81] duration metric: took 4.854068ms waiting for pod "etcd-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.612002   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.616975   50077 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.616995   50077 pod_ready.go:81] duration metric: took 4.985984ms waiting for pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.617006   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.620272   49071 pod_ready.go:92] pod "etcd-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:24.620294   49071 pod_ready.go:81] duration metric: took 1.518699618s waiting for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.620304   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.625954   49071 pod_ready.go:92] pod "kube-apiserver-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:24.625975   49071 pod_ready.go:81] duration metric: took 5.666043ms waiting for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.625985   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.096309   49071 pod_ready.go:92] pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.096338   49071 pod_ready.go:81] duration metric: took 2.470345358s waiting for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.096363   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.101417   49071 pod_ready.go:92] pod "kube-proxy-hvphg" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.101439   49071 pod_ready.go:81] duration metric: took 5.060638ms waiting for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.101457   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.487627   49071 pod_ready.go:92] pod "kube-scheduler-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.487655   49071 pod_ready.go:81] duration metric: took 386.189892ms waiting for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.487668   49071 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:25.805375   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:28.304314   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:24.199489   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:26.694339   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:27.990781   50077 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.990808   50077 pod_ready.go:81] duration metric: took 373.794401ms waiting for pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.990817   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jdvck" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.389532   50077 pod_ready.go:92] pod "kube-proxy-jdvck" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:28.389554   50077 pod_ready.go:81] duration metric: took 398.730628ms waiting for pod "kube-proxy-jdvck" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.389562   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.791217   50077 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:28.791245   50077 pod_ready.go:81] duration metric: took 401.675656ms waiting for pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.791259   50077 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:31.101273   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:29.797752   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:32.294823   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:30.305423   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:32.804966   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:29.196181   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:31.694405   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:33.597846   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.098571   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:34.295326   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.295502   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:35.307544   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:37.804734   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:34.193583   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.194545   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.693640   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.598114   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.598778   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.295582   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.797360   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.303674   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:42.305932   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:41.193409   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.694630   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.097684   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.599550   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.295412   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.295801   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:47.795437   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:44.806885   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:47.305513   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.695737   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:48.194597   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:48.098390   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:50.098465   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.598464   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:49.796354   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.296299   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:49.806019   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.304671   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:50.692678   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.693810   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:55.099808   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:57.596982   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:54.795042   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:56.795788   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:54.305480   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:56.805003   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:55.192666   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:57.192992   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.598091   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:02.097277   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.296748   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.799381   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.304665   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.305140   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.193682   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.694286   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.098871   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.598019   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.297114   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.796174   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:03.804391   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:05.805262   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.304535   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.194236   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.692751   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.693756   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.598278   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:10.598744   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:09.296355   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:11.794188   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:10.805023   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.304639   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:11.193179   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.696086   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.097069   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.598606   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.795184   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.797064   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.804980   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.304229   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:16.193316   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.193452   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.099418   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.597767   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.598478   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.294610   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.295299   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.295580   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.304386   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.304955   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.693442   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.695298   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.598688   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.098094   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.796039   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.294583   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.804411   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:26.805975   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:25.193984   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.194309   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.098448   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.597809   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.295004   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.296770   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.302945   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.303224   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.305333   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.693713   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.693887   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.695638   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.599337   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:36.098527   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.795335   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:35.796128   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:37.798347   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:35.307171   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:37.806058   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:36.192382   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:38.195932   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:38.098563   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.098830   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.598203   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.295075   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.796827   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.304919   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.805069   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.693934   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.694102   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.598267   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.097792   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:45.297437   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.795616   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.805647   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:46.806849   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.695195   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.194156   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.597390   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:52.099367   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:50.294686   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:52.297230   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.306571   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:51.804484   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.194481   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:51.693650   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:53.694257   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:54.597760   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.597897   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:54.794752   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.795666   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:53.805053   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.303997   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:58.304326   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.193984   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:58.693506   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:59.098488   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:01.098937   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:59.297834   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:01.795492   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:00.305557   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:02.805113   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:00.694107   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.194559   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.597853   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:05.598764   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.798231   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:06.296567   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:04.805204   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:06.806277   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:05.693959   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.194793   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.098369   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:10.099343   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:12.597632   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.795941   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:11.295163   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:09.303880   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:11.308399   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:10.692947   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:12.694115   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.098788   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.598778   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:13.297546   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.799219   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:13.804941   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.805508   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.805620   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.194071   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.692344   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.099461   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:22.598528   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:18.294855   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.795197   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.303894   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:22.807109   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:19.693273   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:21.694158   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:23.694489   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:24.598739   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:26.610829   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:23.295231   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:25.296151   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:27.794796   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:25.304009   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:27.304056   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:26.194236   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:28.692475   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.097722   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.099314   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.795050   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.795981   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.304915   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.306232   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:30.693731   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.193919   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.100924   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:35.597972   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:37.598135   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:34.295967   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:36.297180   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.809488   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:36.305924   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:35.696190   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.193380   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.098563   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:42.597443   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.794953   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.794982   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.806251   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:41.304826   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.694041   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.192299   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:44.598402   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.097519   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.294813   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.297991   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.794454   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.803978   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.804440   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.805016   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.192754   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.693494   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.098171   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:51.598327   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.795988   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:52.296853   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.806503   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:51.807986   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:50.193124   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:52.692831   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.097085   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.600496   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.795189   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.795825   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.304728   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.305314   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.696873   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:57.193194   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.098128   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.099894   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.295180   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.295325   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:58.804230   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:00.804430   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.303762   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.193752   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.194280   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.694730   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.597363   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.598434   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.599790   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.295998   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.298356   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.795402   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.305076   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.805412   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:04.884378   49198 pod_ready.go:81] duration metric: took 4m0.000380407s waiting for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	E1024 20:16:04.884408   49198 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:16:04.884437   49198 pod_ready.go:38] duration metric: took 4m3.201253081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:16:04.884459   49198 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:16:04.884488   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:04.884542   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:04.941853   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:04.941878   49198 cri.go:89] found id: ""
	I1024 20:16:04.941889   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:04.941963   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:04.947250   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:04.947317   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:04.990126   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:04.990151   49198 cri.go:89] found id: ""
	I1024 20:16:04.990163   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:04.990226   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:04.995026   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:04.995086   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:05.045422   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:05.045441   49198 cri.go:89] found id: ""
	I1024 20:16:05.045449   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:05.045505   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.049931   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:05.049997   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:05.115746   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:05.115767   49198 cri.go:89] found id: ""
	I1024 20:16:05.115775   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:05.115822   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.120476   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:05.120527   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:05.163487   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:05.163509   49198 cri.go:89] found id: ""
	I1024 20:16:05.163521   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:05.163580   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.167956   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:05.168027   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:05.209375   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:05.209403   49198 cri.go:89] found id: ""
	I1024 20:16:05.209412   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:05.209468   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.213932   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:05.213994   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:05.256033   49198 cri.go:89] found id: ""
	I1024 20:16:05.256055   49198 logs.go:284] 0 containers: []
	W1024 20:16:05.256070   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:05.256077   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:05.256130   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:05.313137   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:05.313163   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:05.313171   49198 cri.go:89] found id: ""
	I1024 20:16:05.313181   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:05.313236   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.319603   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.324116   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:05.324138   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:05.364879   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:05.364905   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:05.430314   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:05.430342   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:05.488524   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:05.488550   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:05.547000   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:05.547029   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:05.561360   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:05.561392   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:05.616215   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:05.616254   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:05.666923   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:05.666955   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:05.707305   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:05.707332   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:05.865943   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:05.865972   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:05.914044   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:05.914070   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:06.370658   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:06.370692   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:06.423891   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:06.423919   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:10.098187   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:12.597089   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:09.796035   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:11.796300   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:09.805755   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:11.806246   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:08.967015   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:16:08.982371   49198 api_server.go:72] duration metric: took 4m12.675281905s to wait for apiserver process to appear ...
	I1024 20:16:08.982397   49198 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:16:08.982431   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:08.982492   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:09.023557   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:09.023575   49198 cri.go:89] found id: ""
	I1024 20:16:09.023582   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:09.023626   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.029901   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:09.029954   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:09.066141   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:09.066169   49198 cri.go:89] found id: ""
	I1024 20:16:09.066181   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:09.066232   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.071099   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:09.071161   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:09.117898   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:09.117917   49198 cri.go:89] found id: ""
	I1024 20:16:09.117927   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:09.117979   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.122675   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:09.122729   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:09.162628   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:09.162647   49198 cri.go:89] found id: ""
	I1024 20:16:09.162656   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:09.162711   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.166799   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:09.166859   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:09.203866   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:09.203894   49198 cri.go:89] found id: ""
	I1024 20:16:09.203904   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:09.203968   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.208141   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:09.208201   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:09.252432   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:09.252449   49198 cri.go:89] found id: ""
	I1024 20:16:09.252457   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:09.252519   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.257709   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:09.257767   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:09.312883   49198 cri.go:89] found id: ""
	I1024 20:16:09.312908   49198 logs.go:284] 0 containers: []
	W1024 20:16:09.312919   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:09.312926   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:09.312984   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:09.365111   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:09.365138   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:09.365145   49198 cri.go:89] found id: ""
	I1024 20:16:09.365155   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:09.365215   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.370442   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.375055   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:09.375082   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:09.440328   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:09.440361   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:09.489007   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:09.489035   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:09.539429   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:09.539467   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:09.591012   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:09.591049   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:09.608336   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:09.608362   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:09.656190   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:09.656216   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:09.704915   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:09.704942   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:09.743847   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:09.743878   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:10.154301   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:10.154342   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:10.296525   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:10.296552   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:10.347731   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:10.347763   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:10.388130   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:10.388157   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:12.931381   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:16:12.938286   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 200:
	ok
	I1024 20:16:12.940208   49198 api_server.go:141] control plane version: v1.28.3
	I1024 20:16:12.940228   49198 api_server.go:131] duration metric: took 3.957823811s to wait for apiserver health ...
	I1024 20:16:12.940236   49198 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:16:12.940255   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:12.940311   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:12.985630   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:12.985654   49198 cri.go:89] found id: ""
	I1024 20:16:12.985664   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:12.985736   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:12.991021   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:12.991094   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:13.031617   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:13.031638   49198 cri.go:89] found id: ""
	I1024 20:16:13.031647   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:13.031690   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.036956   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:13.037010   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:13.074663   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:13.074683   49198 cri.go:89] found id: ""
	I1024 20:16:13.074692   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:13.074745   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.079061   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:13.079115   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:13.122923   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:13.122947   49198 cri.go:89] found id: ""
	I1024 20:16:13.122957   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:13.123010   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.126914   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:13.126987   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:13.174746   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:13.174781   49198 cri.go:89] found id: ""
	I1024 20:16:13.174791   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:13.174867   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.179817   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:13.179884   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:13.228560   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:13.228588   49198 cri.go:89] found id: ""
	I1024 20:16:13.228606   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:13.228661   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.233182   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:13.233247   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:13.272072   49198 cri.go:89] found id: ""
	I1024 20:16:13.272100   49198 logs.go:284] 0 containers: []
	W1024 20:16:13.272110   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:13.272117   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:13.272174   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:13.317104   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:13.317129   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:13.317137   49198 cri.go:89] found id: ""
	I1024 20:16:13.317148   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:13.317208   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.327265   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.331706   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:13.331730   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:13.378259   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:13.378299   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:13.402257   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:13.402289   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:13.465655   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:13.465685   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:13.521268   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:13.521312   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:13.923501   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:13.923550   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:13.976055   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:13.976082   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:14.028953   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:14.028985   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:14.069859   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:14.069887   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:14.196920   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:14.196959   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:14.257588   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:14.257617   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:14.302980   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:14.303019   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:14.344441   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:14.344469   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:16.893365   49198 system_pods.go:59] 8 kube-system pods found
	I1024 20:16:16.893395   49198 system_pods.go:61] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running
	I1024 20:16:16.893404   49198 system_pods.go:61] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running
	I1024 20:16:16.893412   49198 system_pods.go:61] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running
	I1024 20:16:16.893419   49198 system_pods.go:61] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running
	I1024 20:16:16.893426   49198 system_pods.go:61] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running
	I1024 20:16:16.893433   49198 system_pods.go:61] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running
	I1024 20:16:16.893444   49198 system_pods.go:61] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:16.893456   49198 system_pods.go:61] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running
	I1024 20:16:16.893469   49198 system_pods.go:74] duration metric: took 3.953227014s to wait for pod list to return data ...
	I1024 20:16:16.893483   49198 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:16:16.895879   49198 default_sa.go:45] found service account: "default"
	I1024 20:16:16.895896   49198 default_sa.go:55] duration metric: took 2.405313ms for default service account to be created ...
	I1024 20:16:16.895903   49198 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:16:16.902189   49198 system_pods.go:86] 8 kube-system pods found
	I1024 20:16:16.902217   49198 system_pods.go:89] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running
	I1024 20:16:16.902225   49198 system_pods.go:89] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running
	I1024 20:16:16.902232   49198 system_pods.go:89] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running
	I1024 20:16:16.902240   49198 system_pods.go:89] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running
	I1024 20:16:16.902246   49198 system_pods.go:89] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running
	I1024 20:16:16.902253   49198 system_pods.go:89] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running
	I1024 20:16:16.902269   49198 system_pods.go:89] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:16.902281   49198 system_pods.go:89] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running
	I1024 20:16:16.902292   49198 system_pods.go:126] duration metric: took 6.383517ms to wait for k8s-apps to be running ...
	I1024 20:16:16.902303   49198 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:16:16.902359   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:16:16.920015   49198 system_svc.go:56] duration metric: took 17.706073ms WaitForService to wait for kubelet.
	I1024 20:16:16.920039   49198 kubeadm.go:581] duration metric: took 4m20.612955305s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:16:16.920063   49198 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:16:16.924147   49198 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:16:16.924167   49198 node_conditions.go:123] node cpu capacity is 2
	I1024 20:16:16.924177   49198 node_conditions.go:105] duration metric: took 4.109839ms to run NodePressure ...
	I1024 20:16:16.924187   49198 start.go:228] waiting for startup goroutines ...
	I1024 20:16:16.924194   49198 start.go:233] waiting for cluster config update ...
	I1024 20:16:16.924206   49198 start.go:242] writing updated cluster config ...
	I1024 20:16:16.924490   49198 ssh_runner.go:195] Run: rm -f paused
	I1024 20:16:16.973588   49198 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:16:16.975639   49198 out.go:177] * Done! kubectl is now configured to use "embed-certs-867165" cluster and "default" namespace by default
	I1024 20:16:14.597646   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.598202   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:14.296652   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.795527   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:14.304610   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.305225   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.598694   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:21.099076   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.795830   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:21.295897   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.804148   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:20.805158   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.304826   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.598167   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.598533   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:27.598810   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.794690   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.796011   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:27.798006   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.803034   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:26.497612   49708 pod_ready.go:81] duration metric: took 4m0.000149915s waiting for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	E1024 20:16:26.497657   49708 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:16:26.497666   49708 pod_ready.go:38] duration metric: took 4m3.599625321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:16:26.497682   49708 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:16:26.497709   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:26.497757   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:26.569452   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:26.569479   49708 cri.go:89] found id: ""
	I1024 20:16:26.569489   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:26.569551   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.573824   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:26.573872   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:26.618910   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:26.618939   49708 cri.go:89] found id: ""
	I1024 20:16:26.618946   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:26.618998   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.623675   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:26.623723   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:26.671601   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:26.671621   49708 cri.go:89] found id: ""
	I1024 20:16:26.671628   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:26.671665   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.675997   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:26.676048   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:26.723100   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:26.723124   49708 cri.go:89] found id: ""
	I1024 20:16:26.723133   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:26.723187   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.727780   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:26.727837   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:26.765584   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:26.765608   49708 cri.go:89] found id: ""
	I1024 20:16:26.765618   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:26.765663   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.770062   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:26.770121   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:26.811710   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:26.811728   49708 cri.go:89] found id: ""
	I1024 20:16:26.811736   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:26.811786   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.816125   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:26.816187   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:26.860427   49708 cri.go:89] found id: ""
	I1024 20:16:26.860452   49708 logs.go:284] 0 containers: []
	W1024 20:16:26.860462   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:26.860469   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:26.860532   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:26.905052   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:26.905083   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:26.905091   49708 cri.go:89] found id: ""
	I1024 20:16:26.905100   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:26.905154   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.909590   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.913618   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:26.913636   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:26.958127   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:26.958157   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:27.012523   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:27.012555   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:27.059311   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:27.059345   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:27.102879   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:27.102905   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:27.154377   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:27.154409   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:27.197488   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:27.197516   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:27.210530   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:27.210559   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:27.379195   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:27.379225   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:27.826087   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:27.826119   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:27.880305   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:27.880348   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:27.932382   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:27.932417   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:27.979060   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:27.979088   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:29.598843   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:31.598885   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:30.295090   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:32.295447   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:30.532134   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:16:30.547497   49708 api_server.go:72] duration metric: took 4m14.551629626s to wait for apiserver process to appear ...
	I1024 20:16:30.547522   49708 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:16:30.547562   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:30.547627   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:30.588076   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:30.588097   49708 cri.go:89] found id: ""
	I1024 20:16:30.588104   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:30.588159   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.592397   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:30.592467   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:30.632362   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:30.632380   49708 cri.go:89] found id: ""
	I1024 20:16:30.632389   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:30.632446   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.636647   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:30.636695   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:30.676966   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:30.676997   49708 cri.go:89] found id: ""
	I1024 20:16:30.677005   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:30.677050   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.682153   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:30.682206   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:30.723427   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:30.723449   49708 cri.go:89] found id: ""
	I1024 20:16:30.723458   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:30.723516   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.727674   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:30.727740   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:30.774450   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:30.774473   49708 cri.go:89] found id: ""
	I1024 20:16:30.774482   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:30.774535   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.778753   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:30.778821   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:30.830068   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:30.830094   49708 cri.go:89] found id: ""
	I1024 20:16:30.830104   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:30.830169   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.835133   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:30.835201   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:30.885323   49708 cri.go:89] found id: ""
	I1024 20:16:30.885347   49708 logs.go:284] 0 containers: []
	W1024 20:16:30.885357   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:30.885363   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:30.885423   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:30.925415   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:30.925435   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:30.925440   49708 cri.go:89] found id: ""
	I1024 20:16:30.925447   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:30.925506   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.929723   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.933926   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:30.933965   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:30.999217   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:30.999250   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:31.051267   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:31.051300   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:31.107411   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:31.107444   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:31.233980   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:31.234009   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:31.275335   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:31.275362   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:31.329276   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:31.329316   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:31.380149   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:31.380184   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:31.393990   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:31.394016   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:31.440032   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:31.440065   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:31.478413   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:31.478445   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:31.529321   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:31.529349   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:31.578678   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:31.578708   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:33.603558   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:36.099473   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:34.295685   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:36.794759   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:34.514152   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:16:34.520578   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 200:
	ok
	I1024 20:16:34.522271   49708 api_server.go:141] control plane version: v1.28.3
	I1024 20:16:34.522289   49708 api_server.go:131] duration metric: took 3.974761353s to wait for apiserver health ...
	I1024 20:16:34.522297   49708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:16:34.522318   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:34.522363   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:34.568260   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:34.568280   49708 cri.go:89] found id: ""
	I1024 20:16:34.568287   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:34.568336   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.575356   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:34.575414   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:34.623358   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:34.623383   49708 cri.go:89] found id: ""
	I1024 20:16:34.623392   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:34.623449   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.628721   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:34.628777   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:34.675561   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:34.675583   49708 cri.go:89] found id: ""
	I1024 20:16:34.675592   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:34.675654   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.681613   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:34.681677   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:34.722858   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:34.722898   49708 cri.go:89] found id: ""
	I1024 20:16:34.722917   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:34.722974   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.727310   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:34.727376   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:34.768365   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:34.768383   49708 cri.go:89] found id: ""
	I1024 20:16:34.768390   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:34.768436   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.772776   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:34.772837   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:34.825992   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:34.826020   49708 cri.go:89] found id: ""
	I1024 20:16:34.826030   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:34.826083   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.830957   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:34.831011   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:34.878138   49708 cri.go:89] found id: ""
	I1024 20:16:34.878167   49708 logs.go:284] 0 containers: []
	W1024 20:16:34.878175   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:34.878180   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:34.878235   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:34.929288   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:34.929321   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:34.929328   49708 cri.go:89] found id: ""
	I1024 20:16:34.929338   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:34.929391   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.933731   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.938300   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:34.938326   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:34.980919   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:34.980944   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:35.021465   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:35.021495   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:35.165907   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:35.165935   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:35.212733   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:35.212759   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:35.620344   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:35.620395   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:35.669555   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:35.669588   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:35.720959   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:35.720987   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:35.762823   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:35.762852   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:35.805994   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:35.806021   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:35.879019   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:35.879046   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:35.941760   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:35.941796   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:35.995475   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:35.995515   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:38.526080   49708 system_pods.go:59] 8 kube-system pods found
	I1024 20:16:38.526106   49708 system_pods.go:61] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running
	I1024 20:16:38.526114   49708 system_pods.go:61] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running
	I1024 20:16:38.526122   49708 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running
	I1024 20:16:38.526128   49708 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running
	I1024 20:16:38.526133   49708 system_pods.go:61] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running
	I1024 20:16:38.526139   49708 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running
	I1024 20:16:38.526150   49708 system_pods.go:61] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:38.526159   49708 system_pods.go:61] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running
	I1024 20:16:38.526168   49708 system_pods.go:74] duration metric: took 4.003864797s to wait for pod list to return data ...
	I1024 20:16:38.526182   49708 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:16:38.528827   49708 default_sa.go:45] found service account: "default"
	I1024 20:16:38.528854   49708 default_sa.go:55] duration metric: took 2.662588ms for default service account to be created ...
	I1024 20:16:38.528863   49708 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:16:38.534560   49708 system_pods.go:86] 8 kube-system pods found
	I1024 20:16:38.534579   49708 system_pods.go:89] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running
	I1024 20:16:38.534585   49708 system_pods.go:89] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running
	I1024 20:16:38.534589   49708 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running
	I1024 20:16:38.534594   49708 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running
	I1024 20:16:38.534598   49708 system_pods.go:89] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running
	I1024 20:16:38.534602   49708 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running
	I1024 20:16:38.534610   49708 system_pods.go:89] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:38.534615   49708 system_pods.go:89] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running
	I1024 20:16:38.534622   49708 system_pods.go:126] duration metric: took 5.753846ms to wait for k8s-apps to be running ...
	I1024 20:16:38.534630   49708 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:16:38.534668   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:16:38.549835   49708 system_svc.go:56] duration metric: took 15.197069ms WaitForService to wait for kubelet.
	I1024 20:16:38.549856   49708 kubeadm.go:581] duration metric: took 4m22.553994431s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:16:38.549878   49708 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:16:38.553043   49708 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:16:38.553065   49708 node_conditions.go:123] node cpu capacity is 2
	I1024 20:16:38.553076   49708 node_conditions.go:105] duration metric: took 3.193057ms to run NodePressure ...
	I1024 20:16:38.553086   49708 start.go:228] waiting for startup goroutines ...
	I1024 20:16:38.553091   49708 start.go:233] waiting for cluster config update ...
	I1024 20:16:38.553100   49708 start.go:242] writing updated cluster config ...
	I1024 20:16:38.553348   49708 ssh_runner.go:195] Run: rm -f paused
	I1024 20:16:38.601183   49708 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:16:38.603463   49708 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-643126" cluster and "default" namespace by default
	I1024 20:16:38.597848   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:40.599437   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:38.795772   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:41.293845   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:43.096749   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:45.097165   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:47.097443   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:43.298644   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:45.797003   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:49.097716   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:51.597754   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:48.295110   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:50.796361   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:53.600174   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:56.097860   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:53.295856   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:55.295890   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:57.795597   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:58.097890   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:00.598554   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:59.795830   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:02.295268   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:03.098362   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:05.596632   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:04.296575   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:06.296820   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:08.098450   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:10.597828   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:12.599199   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:08.795717   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:11.296662   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:15.097014   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:17.097844   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:13.794373   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:15.795134   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:17.795531   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:19.098039   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:21.098582   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:19.796588   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:22.296536   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:23.597792   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:26.098066   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:24.795501   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:26.796240   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:27.488206   49071 pod_ready.go:81] duration metric: took 4m0.000518995s waiting for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	E1024 20:17:27.488255   49071 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:17:27.488267   49071 pod_ready.go:38] duration metric: took 4m4.400905907s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:17:27.488288   49071 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:17:27.488320   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:27.488379   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:27.544995   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:27.545022   49071 cri.go:89] found id: ""
	I1024 20:17:27.545033   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:27.545116   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.550068   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:27.550127   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:27.595184   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:27.595207   49071 cri.go:89] found id: ""
	I1024 20:17:27.595215   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:27.595265   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.600016   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:27.600075   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:27.644222   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:27.644254   49071 cri.go:89] found id: ""
	I1024 20:17:27.644265   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:27.644321   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.654982   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:27.655048   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:27.697751   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:27.697773   49071 cri.go:89] found id: ""
	I1024 20:17:27.697783   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:27.697838   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.701909   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:27.701969   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:27.746060   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:27.746085   49071 cri.go:89] found id: ""
	I1024 20:17:27.746094   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:27.746147   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.750335   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:27.750392   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:27.791948   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:27.791973   49071 cri.go:89] found id: ""
	I1024 20:17:27.791981   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:27.792045   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.796535   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:27.796616   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:27.839648   49071 cri.go:89] found id: ""
	I1024 20:17:27.839675   49071 logs.go:284] 0 containers: []
	W1024 20:17:27.839683   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:27.839689   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:27.839750   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:27.889284   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:27.889327   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:27.889334   49071 cri.go:89] found id: ""
	I1024 20:17:27.889343   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:27.889404   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.893661   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.897791   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:27.897819   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:27.941335   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:27.941369   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:27.954378   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:27.954409   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:28.115760   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:28.115792   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:28.171378   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:28.171409   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:28.211591   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:28.211620   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:28.247491   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247676   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247811   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247961   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:28.268681   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:28.268717   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:28.099979   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:28.791972   50077 pod_ready.go:81] duration metric: took 4m0.000695315s waiting for pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace to be "Ready" ...
	E1024 20:17:28.792005   50077 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:17:28.792032   50077 pod_ready.go:38] duration metric: took 4m1.199949971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:17:28.792069   50077 kubeadm.go:640] restartCluster took 5m7.653001653s
	W1024 20:17:28.792133   50077 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1024 20:17:28.792173   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1024 20:17:28.321382   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:28.321413   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:28.364236   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:28.364260   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:28.840985   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:28.841028   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:28.896806   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:28.896846   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:28.948487   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:28.948520   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:28.993469   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:28.993500   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:29.052064   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:29.052102   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:29.052154   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:29.052165   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052174   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052180   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052186   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:29.052191   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:29.052196   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:33.598790   50077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.806587354s)
	I1024 20:17:33.598873   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:17:33.614594   50077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:17:33.625146   50077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:17:33.635420   50077 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:17:33.635486   50077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1024 20:17:33.858680   50077 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 20:17:39.053169   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:17:39.069883   49071 api_server.go:72] duration metric: took 4m23.373979574s to wait for apiserver process to appear ...
	I1024 20:17:39.069910   49071 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:17:39.069953   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:39.070015   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:39.116676   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:39.116696   49071 cri.go:89] found id: ""
	I1024 20:17:39.116703   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:39.116752   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.121745   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:39.121814   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:39.174897   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:39.174932   49071 cri.go:89] found id: ""
	I1024 20:17:39.174943   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:39.175002   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.180933   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:39.181003   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:39.239666   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:39.239691   49071 cri.go:89] found id: ""
	I1024 20:17:39.239701   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:39.239754   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.244270   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:39.244328   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:39.285405   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:39.285432   49071 cri.go:89] found id: ""
	I1024 20:17:39.285443   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:39.285503   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.290326   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:39.290393   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:39.330723   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:39.330751   49071 cri.go:89] found id: ""
	I1024 20:17:39.330761   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:39.330816   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.335850   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:39.335917   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:39.375354   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:39.375377   49071 cri.go:89] found id: ""
	I1024 20:17:39.375387   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:39.375449   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.380243   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:39.380313   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:39.424841   49071 cri.go:89] found id: ""
	I1024 20:17:39.424875   49071 logs.go:284] 0 containers: []
	W1024 20:17:39.424885   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:39.424892   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:39.424950   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:39.464134   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:39.464153   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:39.464160   49071 cri.go:89] found id: ""
	I1024 20:17:39.464168   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:39.464224   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.468810   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.473093   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:39.473128   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:39.507113   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507292   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507432   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507588   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:39.530433   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:39.530479   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:39.666739   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:39.666765   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:39.710505   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:39.710538   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:39.749917   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:39.749946   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:39.799168   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:39.799196   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:39.846346   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:39.846377   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:40.273032   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:40.273065   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:40.320491   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:40.320521   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:40.378356   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:40.378395   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:40.421618   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:40.421647   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:40.466303   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:40.466334   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:40.478941   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:40.478966   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:40.544618   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:40.544642   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:40.544694   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:40.544706   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544718   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544725   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544733   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:40.544739   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:40.544747   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:46.481686   50077 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1024 20:17:46.481762   50077 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 20:17:46.481861   50077 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 20:17:46.482000   50077 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 20:17:46.482104   50077 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 20:17:46.482236   50077 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 20:17:46.482362   50077 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 20:17:46.482486   50077 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1024 20:17:46.482538   50077 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 20:17:46.484150   50077 out.go:204]   - Generating certificates and keys ...
	I1024 20:17:46.484246   50077 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 20:17:46.484315   50077 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 20:17:46.484402   50077 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1024 20:17:46.484509   50077 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1024 20:17:46.484603   50077 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1024 20:17:46.484689   50077 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1024 20:17:46.484778   50077 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1024 20:17:46.484870   50077 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1024 20:17:46.484972   50077 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1024 20:17:46.485069   50077 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1024 20:17:46.485123   50077 kubeadm.go:322] [certs] Using the existing "sa" key
	I1024 20:17:46.485200   50077 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 20:17:46.485263   50077 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 20:17:46.485343   50077 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 20:17:46.485430   50077 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 20:17:46.485503   50077 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 20:17:46.485590   50077 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 20:17:46.487065   50077 out.go:204]   - Booting up control plane ...
	I1024 20:17:46.487158   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 20:17:46.487219   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 20:17:46.487291   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 20:17:46.487401   50077 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 20:17:46.487551   50077 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 20:17:46.487623   50077 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.003664 seconds
	I1024 20:17:46.487756   50077 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 20:17:46.487882   50077 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 20:17:46.487940   50077 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 20:17:46.488123   50077 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-467375 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1024 20:17:46.488199   50077 kubeadm.go:322] [bootstrap-token] Using token: axp9sy.xsem3c8nzt72b18p
	I1024 20:17:46.490507   50077 out.go:204]   - Configuring RBAC rules ...
	I1024 20:17:46.490603   50077 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 20:17:46.490719   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 20:17:46.490832   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 20:17:46.490938   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 20:17:46.491009   50077 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 20:17:46.491044   50077 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 20:17:46.491083   50077 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 20:17:46.491091   50077 kubeadm.go:322] 
	I1024 20:17:46.491151   50077 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 20:17:46.491163   50077 kubeadm.go:322] 
	I1024 20:17:46.491224   50077 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 20:17:46.491231   50077 kubeadm.go:322] 
	I1024 20:17:46.491260   50077 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 20:17:46.491346   50077 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 20:17:46.491409   50077 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 20:17:46.491419   50077 kubeadm.go:322] 
	I1024 20:17:46.491511   50077 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 20:17:46.491621   50077 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 20:17:46.491715   50077 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 20:17:46.491725   50077 kubeadm.go:322] 
	I1024 20:17:46.491829   50077 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1024 20:17:46.491929   50077 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 20:17:46.491937   50077 kubeadm.go:322] 
	I1024 20:17:46.492064   50077 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token axp9sy.xsem3c8nzt72b18p \
	I1024 20:17:46.492249   50077 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f \
	I1024 20:17:46.492292   50077 kubeadm.go:322]     --control-plane 	  
	I1024 20:17:46.492302   50077 kubeadm.go:322] 
	I1024 20:17:46.492423   50077 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 20:17:46.492435   50077 kubeadm.go:322] 
	I1024 20:17:46.492532   50077 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token axp9sy.xsem3c8nzt72b18p \
	I1024 20:17:46.492675   50077 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
	I1024 20:17:46.492686   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:17:46.492694   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:17:46.494152   50077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:17:46.495677   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:17:46.510639   50077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:17:46.539872   50077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:17:46.539933   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:46.539945   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=old-k8s-version-467375 minikube.k8s.io/updated_at=2023_10_24T20_17_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:46.984338   50077 ops.go:34] apiserver oom_adj: -16
	I1024 20:17:46.984391   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:47.163022   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:47.798557   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:48.298499   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:48.798506   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:49.298076   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:49.798120   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.298504   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.798493   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:51.298777   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:51.798477   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:52.298309   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:52.798243   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.546645   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:17:50.552245   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 200:
	ok
	I1024 20:17:50.553721   49071 api_server.go:141] control plane version: v1.28.3
	I1024 20:17:50.553747   49071 api_server.go:131] duration metric: took 11.483829454s to wait for apiserver health ...
	I1024 20:17:50.553757   49071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:17:50.553784   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:50.553844   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:50.594504   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:50.594528   49071 cri.go:89] found id: ""
	I1024 20:17:50.594536   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:50.594586   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.598912   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:50.598963   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:50.644339   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:50.644355   49071 cri.go:89] found id: ""
	I1024 20:17:50.644362   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:50.644406   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.649046   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:50.649099   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:50.688245   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:50.688268   49071 cri.go:89] found id: ""
	I1024 20:17:50.688278   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:50.688330   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.692382   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:50.692429   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:50.736359   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:50.736384   49071 cri.go:89] found id: ""
	I1024 20:17:50.736393   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:50.736451   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.741226   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:50.741287   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:50.797894   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:50.797920   49071 cri.go:89] found id: ""
	I1024 20:17:50.797930   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:50.797997   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.802725   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:50.802781   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:50.851081   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:50.851106   49071 cri.go:89] found id: ""
	I1024 20:17:50.851115   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:50.851166   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.855549   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:50.855600   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:50.909237   49071 cri.go:89] found id: ""
	I1024 20:17:50.909265   49071 logs.go:284] 0 containers: []
	W1024 20:17:50.909276   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:50.909283   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:50.909355   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:50.958541   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:50.958567   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:50.958574   49071 cri.go:89] found id: ""
	I1024 20:17:50.958583   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:50.958638   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.962947   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.967261   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:50.967283   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:51.087158   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:51.087190   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:51.144421   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:51.144458   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:51.200040   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:51.200072   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:51.255703   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:51.255740   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:51.683831   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:51.683869   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:51.726821   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:51.726856   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:51.776977   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:51.777006   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:51.822826   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:51.822861   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:51.873557   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.873838   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.874063   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.874313   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:51.900648   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:51.900690   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:51.916123   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:51.916161   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:51.960440   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:51.960470   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:52.010020   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:52.010051   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:52.051039   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:52.051063   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:52.051113   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:52.051127   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051142   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051162   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051173   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:52.051183   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:52.051190   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
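The "Found kubelet problem" warnings above come from scanning the last 400 lines of the kubelet journal for list/watch failures. Below is a rough, self-contained sketch of that kind of pass, under the assumption that matching on reflector list/watch error text is enough for illustration; the function name findKubeletProblems and the pattern list are not minikube's logs.go heuristics.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findKubeletProblems pulls the tail of the kubelet journal and keeps lines
// that look like reflector list/watch failures, similar to the warnings
// surfaced in the log above. Illustrative only.
func findKubeletProblems(n int) ([]string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo journalctl -u kubelet -n %d", n)).Output()
	if err != nil {
		return nil, fmt.Errorf("journalctl: %w", err)
	}
	var problems []string
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "reflector.go") &&
			(strings.Contains(line, "failed to list") || strings.Contains(line, "Failed to watch")) {
			problems = append(problems, line)
		}
	}
	return problems, nil
}

func main() {
	problems, err := findKubeletProblems(400)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("Problems detected in kubelet: %d\n", len(problems))
	for _, p := range problems {
		fmt.Println("  " + p)
	}
}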
	I1024 20:17:53.298168   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:53.798546   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:54.298175   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:54.798534   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:55.298510   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:55.798562   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:56.297914   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:56.797930   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:57.298527   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:57.798493   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:58.298630   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:58.798550   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:59.298526   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:59.798537   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:00.298538   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:00.798072   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:01.014502   50077 kubeadm.go:1081] duration metric: took 14.474620601s to wait for elevateKubeSystemPrivileges.
	I1024 20:18:01.014547   50077 kubeadm.go:406] StartCluster complete in 5m39.9402605s
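The burst of identical `kubectl get sa default` runs above is a half-second poll that ends once the default service account exists (the 14.47s recorded for elevateKubeSystemPrivileges). A minimal sketch of such a poll loop follows; the helper name, timeout, and paths are assumptions for illustration, not minikube's kubeadm.go code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries `kubectl get sa default` every 500ms
// until it succeeds or the deadline passes, roughly what the repeated runs
// in the log above are doing. Illustrative only.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not found within %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.16.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute)
	fmt.Println(err)
}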
	I1024 20:18:01.014569   50077 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:18:01.014667   50077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:18:01.017210   50077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:18:01.017539   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:18:01.017574   50077 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:18:01.017659   50077 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017666   50077 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017677   50077 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-467375"
	W1024 20:18:01.017690   50077 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:18:01.017695   50077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-467375"
	I1024 20:18:01.017699   50077 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017718   50077 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-467375"
	W1024 20:18:01.017727   50077 addons.go:240] addon metrics-server should already be in state true
	I1024 20:18:01.017731   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.017777   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.017816   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
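The "enable addons start" line above carries a map of addon name to desired state; the three "Setting addon ... =true" lines that follow are the entries whose value is true. Purely as a sketch of that shape (the variable names are illustrative, not addons.go API):

package main

import (
	"fmt"
	"sort"
)

// Reduce an addon-name -> desired-state map to the addons that need enabling,
// matching the storage-provisioner / default-storageclass / metrics-server
// set picked out in the log above. Illustrative only.
func main() {
	toEnable := map[string]bool{
		"storage-provisioner":  true,
		"default-storageclass": true,
		"metrics-server":       true,
		"ingress":              false,
		"dashboard":            false,
	}
	var enabled []string
	for name, want := range toEnable {
		if want {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled)
	fmt.Println("enabling addons:", enabled)
}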
	I1024 20:18:01.018053   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018088   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.018111   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018122   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018149   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.018257   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.036179   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37631
	I1024 20:18:01.036834   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.037477   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.037504   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.037665   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43905
	I1024 20:18:01.037824   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34475
	I1024 20:18:01.037912   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.038074   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.038220   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.038306   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.038850   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.038867   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.039010   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.039021   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.039391   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.039410   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.039925   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.039949   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.039974   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.040014   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.041243   50077 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-467375"
	W1024 20:18:01.041258   50077 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:18:01.041277   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.041611   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.041645   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.056254   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
	I1024 20:18:01.056888   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.057215   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I1024 20:18:01.057487   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.057502   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.057895   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.057956   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.058536   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.058574   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.058844   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.058857   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.058929   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I1024 20:18:01.059172   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.059288   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.059451   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.059964   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.059975   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.060353   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.060565   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.061107   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.062802   50077 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:18:01.064189   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:18:01.064209   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:18:01.064230   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.062154   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.066082   50077 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:18:01.067046   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.067880   50077 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:18:01.067901   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:18:01.067921   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.068400   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.068432   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.069073   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.069343   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.069484   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.069587   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.071678   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.072196   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.072220   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.072596   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.072776   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.072905   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.073043   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.079576   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I1024 20:18:01.080025   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.080592   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.080613   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.081035   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.081240   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.083090   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.083404   50077 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:18:01.083425   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:18:01.083443   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.086433   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.086802   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.086824   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.087003   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.087198   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.087348   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.087506   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.197205   50077 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-467375" context rescaled to 1 replicas
	I1024 20:18:01.197249   50077 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:18:01.200328   50077 out.go:177] * Verifying Kubernetes components...
	I1024 20:18:02.061971   49071 system_pods.go:59] 8 kube-system pods found
	I1024 20:18:02.062015   49071 system_pods.go:61] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running
	I1024 20:18:02.062024   49071 system_pods.go:61] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running
	I1024 20:18:02.062031   49071 system_pods.go:61] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running
	I1024 20:18:02.062040   49071 system_pods.go:61] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running
	I1024 20:18:02.062047   49071 system_pods.go:61] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running
	I1024 20:18:02.062054   49071 system_pods.go:61] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running
	I1024 20:18:02.062066   49071 system_pods.go:61] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:02.062078   49071 system_pods.go:61] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running
	I1024 20:18:02.062086   49071 system_pods.go:74] duration metric: took 11.508322005s to wait for pod list to return data ...
	I1024 20:18:02.062098   49071 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:18:02.065560   49071 default_sa.go:45] found service account: "default"
	I1024 20:18:02.065585   49071 default_sa.go:55] duration metric: took 3.476366ms for default service account to be created ...
	I1024 20:18:02.065595   49071 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:18:02.073224   49071 system_pods.go:86] 8 kube-system pods found
	I1024 20:18:02.073253   49071 system_pods.go:89] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running
	I1024 20:18:02.073262   49071 system_pods.go:89] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running
	I1024 20:18:02.073269   49071 system_pods.go:89] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running
	I1024 20:18:02.073277   49071 system_pods.go:89] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running
	I1024 20:18:02.073284   49071 system_pods.go:89] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running
	I1024 20:18:02.073290   49071 system_pods.go:89] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running
	I1024 20:18:02.073313   49071 system_pods.go:89] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:02.073326   49071 system_pods.go:89] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running
	I1024 20:18:02.073335   49071 system_pods.go:126] duration metric: took 7.733883ms to wait for k8s-apps to be running ...
	I1024 20:18:02.073346   49071 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:18:02.073405   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:18:02.093085   49071 system_svc.go:56] duration metric: took 19.727658ms WaitForService to wait for kubelet.
	I1024 20:18:02.093113   49071 kubeadm.go:581] duration metric: took 4m46.397215509s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:18:02.093135   49071 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:18:02.101982   49071 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:18:02.102007   49071 node_conditions.go:123] node cpu capacity is 2
	I1024 20:18:02.102018   49071 node_conditions.go:105] duration metric: took 8.878046ms to run NodePressure ...
	I1024 20:18:02.102035   49071 start.go:228] waiting for startup goroutines ...
	I1024 20:18:02.102041   49071 start.go:233] waiting for cluster config update ...
	I1024 20:18:02.102054   49071 start.go:242] writing updated cluster config ...
	I1024 20:18:02.102767   49071 ssh_runner.go:195] Run: rm -f paused
	I1024 20:18:02.159693   49071 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:18:02.161831   49071 out.go:177] * Done! kubectl is now configured to use "no-preload-014826" cluster and "default" namespace by default
	I1024 20:18:01.201778   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:18:01.315241   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:18:01.335753   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:18:01.339160   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:18:01.339182   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:18:01.376704   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:18:01.376726   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:18:01.385150   50077 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-467375" to be "Ready" ...
	I1024 20:18:01.385223   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 20:18:01.443957   50077 node_ready.go:49] node "old-k8s-version-467375" has status "Ready":"True"
	I1024 20:18:01.443978   50077 node_ready.go:38] duration metric: took 58.799937ms waiting for node "old-k8s-version-467375" to be "Ready" ...
	I1024 20:18:01.443987   50077 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:18:01.453968   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:18:01.453998   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:18:01.481599   50077 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:01.543065   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
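The addon installs above follow a two-step pattern: each manifest is first copied onto the node (the "scp memory -->" lines), then all of them are applied in a single kubectl invocation with KUBECONFIG pointed at the in-VM kubeconfig. The sketch below shows only the apply half of that flow under those assumptions; applyAddonManifests and its error handling are illustrative, not minikube's addons.go API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// applyAddonManifests runs `sudo KUBECONFIG=<kc> kubectl apply -f <m> ...`
// for a set of manifests already present on the node, mirroring the command
// logged above. Illustrative only.
func applyAddonManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Println(strings.TrimSpace(string(out)))
	return nil
}

func main() {
	_ = applyAddonManifests(
		"/var/lib/minikube/binaries/v1.16.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		})
}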
	I1024 20:18:02.715998   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.400725332s)
	I1024 20:18:02.716049   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716062   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716066   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.38027937s)
	I1024 20:18:02.716103   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716120   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716152   50077 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.330913087s)
	I1024 20:18:02.716170   50077 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
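The long sed pipeline completed just above edits CoreDNS's ConfigMap in place: it inserts a hosts plugin stanza ahead of the existing forward directive so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 here), adds a log directive before errors, and feeds the result back through kubectl replace. Reconstructed from the sed expressions in that command, the injected stanza looks like:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }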
	I1024 20:18:02.716377   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716392   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.716402   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716410   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716512   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.716522   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716536   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.716547   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716557   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716623   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716637   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.717532   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.717534   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.717554   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.790444   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.790480   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.790901   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.790925   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895176   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.352065096s)
	I1024 20:18:02.895243   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.895268   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.895611   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.895630   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895634   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.895639   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.895654   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.895875   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.895888   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895905   50077 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-467375"
	I1024 20:18:02.897664   50077 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1024 20:18:02.899508   50077 addons.go:502] enable addons completed in 1.881940564s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1024 20:18:03.719917   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:06.207388   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:08.207967   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:10.708258   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:12.208133   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"True"
	I1024 20:18:12.208155   50077 pod_ready.go:81] duration metric: took 10.726531733s waiting for pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.208166   50077 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9bpht" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.213213   50077 pod_ready.go:92] pod "kube-proxy-9bpht" in "kube-system" namespace has status "Ready":"True"
	I1024 20:18:12.213237   50077 pod_ready.go:81] duration metric: took 5.063943ms waiting for pod "kube-proxy-9bpht" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.213247   50077 pod_ready.go:38] duration metric: took 10.769249135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:18:12.213267   50077 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:18:12.213344   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:18:12.228272   50077 api_server.go:72] duration metric: took 11.030986098s to wait for apiserver process to appear ...
	I1024 20:18:12.228295   50077 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:18:12.228313   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:18:12.234663   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1024 20:18:12.235584   50077 api_server.go:141] control plane version: v1.16.0
	I1024 20:18:12.235599   50077 api_server.go:131] duration metric: took 7.297294ms to wait for apiserver health ...
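The health check above is a plain GET of https://192.168.39.71:8443/healthz, treated as healthy on an HTTP 200 with body "ok". The sketch below reproduces that probe in the simplest possible form; skipping TLS verification is a simplification for illustration only (the real check may authenticate against the cluster's credentials), and the function name is an assumption.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkAPIServerHealthz GETs the apiserver /healthz endpoint and reports the
// status code and body, as in the "returned 200: ok" lines above. Illustrative only.
func checkAPIServerHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkAPIServerHealthz("https://192.168.39.71:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}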
	I1024 20:18:12.235605   50077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:18:12.239203   50077 system_pods.go:59] 4 kube-system pods found
	I1024 20:18:12.239228   50077 system_pods.go:61] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.239235   50077 system_pods.go:61] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.239246   50077 system_pods.go:61] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.239292   50077 system_pods.go:61] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.239307   50077 system_pods.go:74] duration metric: took 3.696523ms to wait for pod list to return data ...
	I1024 20:18:12.239315   50077 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:18:12.242065   50077 default_sa.go:45] found service account: "default"
	I1024 20:18:12.242080   50077 default_sa.go:55] duration metric: took 2.760528ms for default service account to be created ...
	I1024 20:18:12.242086   50077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:18:12.245602   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.245624   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.245631   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.245640   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.245648   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.245664   50077 retry.go:31] will retry after 287.935783ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:12.538837   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.538900   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.538924   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.538942   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.538955   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.538979   50077 retry.go:31] will retry after 320.680304ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:12.864800   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.864826   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.864832   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.864838   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.864844   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.864858   50077 retry.go:31] will retry after 364.04425ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:13.233903   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:13.233927   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:13.233934   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:13.233941   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:13.233946   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:13.233974   50077 retry.go:31] will retry after 559.821457ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:13.799208   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:13.799234   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:13.799240   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:13.799246   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:13.799252   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:13.799266   50077 retry.go:31] will retry after 522.263157ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:14.325735   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:14.325767   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:14.325776   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:14.325789   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:14.325799   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:14.325817   50077 retry.go:31] will retry after 668.137602ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:14.999589   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:14.999614   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:14.999620   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:14.999626   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:14.999632   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:14.999646   50077 retry.go:31] will retry after 859.983274ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:15.865531   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:15.865556   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:15.865561   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:15.865568   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:15.865573   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:15.865589   50077 retry.go:31] will retry after 1.238765858s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:17.109999   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:17.110023   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:17.110028   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:17.110035   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:17.110041   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:17.110054   50077 retry.go:31] will retry after 1.485428629s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:18.600612   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:18.600635   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:18.600641   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:18.600647   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:18.600652   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:18.600665   50077 retry.go:31] will retry after 2.290652681s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:20.897529   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:20.897556   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:20.897562   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:20.897571   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:20.897577   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:20.897593   50077 retry.go:31] will retry after 2.367552906s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:23.270766   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:23.270792   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:23.270800   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:23.270810   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:23.270817   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:23.270834   50077 retry.go:31] will retry after 2.861357376s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:26.136663   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:26.136696   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:26.136704   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:26.136715   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:26.136725   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:26.136743   50077 retry.go:31] will retry after 3.526737387s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:29.670148   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:29.670175   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:29.670181   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:29.670188   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:29.670195   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:29.670215   50077 retry.go:31] will retry after 5.450931485s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:35.125964   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:35.125989   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:35.125994   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:35.126001   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:35.126007   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:35.126022   50077 retry.go:31] will retry after 5.914408322s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:41.046649   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:41.046670   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:41.046677   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:41.046684   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:41.046690   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:41.046704   50077 retry.go:31] will retry after 6.748980526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:47.802189   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:47.802212   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:47.802217   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:47.802225   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:47.802230   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:47.802244   50077 retry.go:31] will retry after 8.662562452s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:56.471025   50077 system_pods.go:86] 7 kube-system pods found
	I1024 20:18:56.471062   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:56.471071   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:18:56.471079   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:18:56.471086   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:56.471094   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Pending
	I1024 20:18:56.471108   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:56.471121   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:56.471142   50077 retry.go:31] will retry after 10.356793998s: missing components: etcd, kube-scheduler
	I1024 20:19:06.834711   50077 system_pods.go:86] 8 kube-system pods found
	I1024 20:19:06.834741   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:19:06.834749   50077 system_pods.go:89] "etcd-old-k8s-version-467375" [8e194c9a-b258-4488-9fda-24b681d09d8d] Pending
	I1024 20:19:06.834755   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:19:06.834762   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:19:06.834767   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:19:06.834772   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Running
	I1024 20:19:06.834782   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:19:06.834792   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:19:06.834809   50077 retry.go:31] will retry after 14.609583217s: missing components: etcd
	I1024 20:19:21.450651   50077 system_pods.go:86] 8 kube-system pods found
	I1024 20:19:21.450678   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:19:21.450685   50077 system_pods.go:89] "etcd-old-k8s-version-467375" [8e194c9a-b258-4488-9fda-24b681d09d8d] Running
	I1024 20:19:21.450689   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:19:21.450693   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:19:21.450699   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:19:21.450709   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Running
	I1024 20:19:21.450719   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:19:21.450732   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:19:21.450745   50077 system_pods.go:126] duration metric: took 1m9.20865321s to wait for k8s-apps to be running ...
	I1024 20:19:21.450757   50077 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:19:21.450800   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:19:21.468030   50077 system_svc.go:56] duration metric: took 17.254248ms WaitForService to wait for kubelet.
	I1024 20:19:21.468061   50077 kubeadm.go:581] duration metric: took 1m20.270780436s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:19:21.468089   50077 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:19:21.471958   50077 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:19:21.471982   50077 node_conditions.go:123] node cpu capacity is 2
	I1024 20:19:21.471993   50077 node_conditions.go:105] duration metric: took 3.898893ms to run NodePressure ...
	I1024 20:19:21.472003   50077 start.go:228] waiting for startup goroutines ...
	I1024 20:19:21.472008   50077 start.go:233] waiting for cluster config update ...
	I1024 20:19:21.472018   50077 start.go:242] writing updated cluster config ...
	I1024 20:19:21.472257   50077 ssh_runner.go:195] Run: rm -f paused
	I1024 20:19:21.520082   50077 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1024 20:19:21.522545   50077 out.go:177] 
	W1024 20:19:21.524125   50077 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1024 20:19:21.525515   50077 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1024 20:19:21.527113   50077 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-467375" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 20:12:24 UTC, ends at Tue 2023-10-24 20:27:04 UTC. --
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.857095156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179223857082618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=a979c04c-dd20-4df4-81df-a53ac4663c9d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.857574685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6110c20a-00dd-4614-8730-e84863216f7d name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.857711347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6110c20a-00dd-4614-8730-e84863216f7d name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.857959932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698178425843465625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:615a725b971e1534d6675b4ce3c2bfbcf12b2ead175113f6e62bd71b3c80fb51,PodSandboxId:143351ce77884696e7e47359b3f8d32520306badd38d49ff39d3b85c3156e448,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178402484252772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a8e5c07-7077-4947-8c31-f3c6da4d5e92,},Annotations:map[string]string{io.kubernetes.container.hash: a91ab45d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8,PodSandboxId:e375bca1f8d8acb45a90a1162cb2fef24b01a4b3691efa5b679e15f93d46860b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698178401328129782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gnn8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8f83c43-bf4a-452f-96c3-e968aa6cfd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8f1249,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1698178395002860402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c,PodSandboxId:1764bdf6a043248d5ce7ad539e44f5bea288797d8097ec2cd882205a5ee75b5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698178394979211527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hvphg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9c3c47-456b-4a
a9-bf59-882cc3d2f3f7,},Annotations:map[string]string{io.kubernetes.container.hash: 84ae6965,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202,PodSandboxId:d059d8d893a6b3a05e86a9bd6721c6846745b4781ed76b8a5480d854c034ba81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698178387279558750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297ea18ade8c720921f2e31
4b05678b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b,PodSandboxId:0e2578156817835bf70037d370b98a02feecd82b19de06f4c024e62cb73d26b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698178387210413493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cdb7ecf2d6a0a78bf6c144de839e50,},Annotations:map[string]string{io.kubern
etes.container.hash: aa346f6c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33,PodSandboxId:2b9b47333434fd97edc6ea8efccbfe6d4bad9faaef3b838f55b395ffd002f65c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698178386860489332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785df71b0f57821e3cd5d04047439a03,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32,PodSandboxId:c64448b4c09a0ac1b4df0cf41d913023a90f99a0670b03507254a0abbf03e7e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698178386511844069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0b06526c504aeef792396e56b6c264,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 69ac14d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6110c20a-00dd-4614-8730-e84863216f7d name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.897434641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=07128c27-12ca-4150-881a-e587f3bda1a1 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.897521708Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=07128c27-12ca-4150-881a-e587f3bda1a1 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.899705690Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7ff3ccee-adfb-4601-9ba0-28d2ec36a87d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.900197143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179223900176267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=7ff3ccee-adfb-4601-9ba0-28d2ec36a87d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.901011383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a9cc6bbd-9d6b-4cee-9d53-e9de7f94dd14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.901087700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a9cc6bbd-9d6b-4cee-9d53-e9de7f94dd14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.901473504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698178425843465625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:615a725b971e1534d6675b4ce3c2bfbcf12b2ead175113f6e62bd71b3c80fb51,PodSandboxId:143351ce77884696e7e47359b3f8d32520306badd38d49ff39d3b85c3156e448,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178402484252772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a8e5c07-7077-4947-8c31-f3c6da4d5e92,},Annotations:map[string]string{io.kubernetes.container.hash: a91ab45d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8,PodSandboxId:e375bca1f8d8acb45a90a1162cb2fef24b01a4b3691efa5b679e15f93d46860b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698178401328129782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gnn8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8f83c43-bf4a-452f-96c3-e968aa6cfd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8f1249,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1698178395002860402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c,PodSandboxId:1764bdf6a043248d5ce7ad539e44f5bea288797d8097ec2cd882205a5ee75b5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698178394979211527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hvphg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9c3c47-456b-4a
a9-bf59-882cc3d2f3f7,},Annotations:map[string]string{io.kubernetes.container.hash: 84ae6965,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202,PodSandboxId:d059d8d893a6b3a05e86a9bd6721c6846745b4781ed76b8a5480d854c034ba81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698178387279558750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297ea18ade8c720921f2e31
4b05678b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b,PodSandboxId:0e2578156817835bf70037d370b98a02feecd82b19de06f4c024e62cb73d26b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698178387210413493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cdb7ecf2d6a0a78bf6c144de839e50,},Annotations:map[string]string{io.kubern
etes.container.hash: aa346f6c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33,PodSandboxId:2b9b47333434fd97edc6ea8efccbfe6d4bad9faaef3b838f55b395ffd002f65c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698178386860489332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785df71b0f57821e3cd5d04047439a03,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32,PodSandboxId:c64448b4c09a0ac1b4df0cf41d913023a90f99a0670b03507254a0abbf03e7e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698178386511844069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0b06526c504aeef792396e56b6c264,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 69ac14d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a9cc6bbd-9d6b-4cee-9d53-e9de7f94dd14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.942432477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b5fe46d1-300b-4b2e-a9e7-757d3a5b4bcd name=/runtime.v1.RuntimeService/Version
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.942492777Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b5fe46d1-300b-4b2e-a9e7-757d3a5b4bcd name=/runtime.v1.RuntimeService/Version
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.944298497Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fca5b01f-b1b9-4803-8b4a-435bba164ff7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.944762044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179223944749839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=fca5b01f-b1b9-4803-8b4a-435bba164ff7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.945484813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=73dea0ce-0aea-4896-9275-9b135b70e106 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.945566659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=73dea0ce-0aea-4896-9275-9b135b70e106 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.945817400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698178425843465625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:615a725b971e1534d6675b4ce3c2bfbcf12b2ead175113f6e62bd71b3c80fb51,PodSandboxId:143351ce77884696e7e47359b3f8d32520306badd38d49ff39d3b85c3156e448,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178402484252772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a8e5c07-7077-4947-8c31-f3c6da4d5e92,},Annotations:map[string]string{io.kubernetes.container.hash: a91ab45d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8,PodSandboxId:e375bca1f8d8acb45a90a1162cb2fef24b01a4b3691efa5b679e15f93d46860b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698178401328129782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gnn8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8f83c43-bf4a-452f-96c3-e968aa6cfd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8f1249,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1698178395002860402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c,PodSandboxId:1764bdf6a043248d5ce7ad539e44f5bea288797d8097ec2cd882205a5ee75b5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698178394979211527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hvphg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9c3c47-456b-4a
a9-bf59-882cc3d2f3f7,},Annotations:map[string]string{io.kubernetes.container.hash: 84ae6965,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202,PodSandboxId:d059d8d893a6b3a05e86a9bd6721c6846745b4781ed76b8a5480d854c034ba81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698178387279558750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297ea18ade8c720921f2e31
4b05678b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b,PodSandboxId:0e2578156817835bf70037d370b98a02feecd82b19de06f4c024e62cb73d26b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698178387210413493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cdb7ecf2d6a0a78bf6c144de839e50,},Annotations:map[string]string{io.kubern
etes.container.hash: aa346f6c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33,PodSandboxId:2b9b47333434fd97edc6ea8efccbfe6d4bad9faaef3b838f55b395ffd002f65c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698178386860489332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785df71b0f57821e3cd5d04047439a03,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32,PodSandboxId:c64448b4c09a0ac1b4df0cf41d913023a90f99a0670b03507254a0abbf03e7e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698178386511844069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0b06526c504aeef792396e56b6c264,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 69ac14d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=73dea0ce-0aea-4896-9275-9b135b70e106 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.982007780Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fd731901-e040-4073-a2bb-abb55e9e97e2 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.982099552Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fd731901-e040-4073-a2bb-abb55e9e97e2 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.984029080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=71d34749-8d15-413a-9b4d-76940e172d88 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.984493062Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179223984471981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=71d34749-8d15-413a-9b4d-76940e172d88 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.985070751Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bfad7ff8-e5d5-4fb8-8cec-9328991106a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.985150483Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bfad7ff8-e5d5-4fb8-8cec-9328991106a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:27:03 no-preload-014826 crio[709]: time="2023-10-24 20:27:03.985333015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698178425843465625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:615a725b971e1534d6675b4ce3c2bfbcf12b2ead175113f6e62bd71b3c80fb51,PodSandboxId:143351ce77884696e7e47359b3f8d32520306badd38d49ff39d3b85c3156e448,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178402484252772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a8e5c07-7077-4947-8c31-f3c6da4d5e92,},Annotations:map[string]string{io.kubernetes.container.hash: a91ab45d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8,PodSandboxId:e375bca1f8d8acb45a90a1162cb2fef24b01a4b3691efa5b679e15f93d46860b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698178401328129782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gnn8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8f83c43-bf4a-452f-96c3-e968aa6cfd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8f1249,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1698178395002860402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c,PodSandboxId:1764bdf6a043248d5ce7ad539e44f5bea288797d8097ec2cd882205a5ee75b5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698178394979211527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hvphg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9c3c47-456b-4a
a9-bf59-882cc3d2f3f7,},Annotations:map[string]string{io.kubernetes.container.hash: 84ae6965,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202,PodSandboxId:d059d8d893a6b3a05e86a9bd6721c6846745b4781ed76b8a5480d854c034ba81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698178387279558750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297ea18ade8c720921f2e31
4b05678b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b,PodSandboxId:0e2578156817835bf70037d370b98a02feecd82b19de06f4c024e62cb73d26b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698178387210413493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cdb7ecf2d6a0a78bf6c144de839e50,},Annotations:map[string]string{io.kubern
etes.container.hash: aa346f6c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33,PodSandboxId:2b9b47333434fd97edc6ea8efccbfe6d4bad9faaef3b838f55b395ffd002f65c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698178386860489332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785df71b0f57821e3cd5d04047439a03,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32,PodSandboxId:c64448b4c09a0ac1b4df0cf41d913023a90f99a0670b03507254a0abbf03e7e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698178386511844069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0b06526c504aeef792396e56b6c264,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 69ac14d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bfad7ff8-e5d5-4fb8-8cec-9328991106a8 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6d89cb6110d0a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   8f828f4fe169d       storage-provisioner
	615a725b971e1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   143351ce77884       busybox
	94df20bf68998       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   e375bca1f8d8a       coredns-5dd5756b68-gnn8j
	7e817e194cdec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   8f828f4fe169d       storage-provisioner
	bc751572f7c36       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      13 minutes ago      Running             kube-proxy                1                   1764bdf6a0432       kube-proxy-hvphg
	458ce37f1738a       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      13 minutes ago      Running             kube-scheduler            1                   d059d8d893a6b       kube-scheduler-no-preload-014826
	cb13ad95dea1a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   0e25781568178       etcd-no-preload-014826
	153d53cd79d89       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      13 minutes ago      Running             kube-controller-manager   1                   2b9b47333434f       kube-controller-manager-no-preload-014826
	c440cb516cdfb       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      13 minutes ago      Running             kube-apiserver            1                   c64448b4c09a0       kube-apiserver-no-preload-014826
	
	* 
	* ==> coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51339 - 58575 "HINFO IN 969512186226067403.7834173540402370385. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.007987292s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-014826
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-014826
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=no-preload-014826
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T20_02_50_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 20:02:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-014826
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 20:27:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 20:23:56 +0000   Tue, 24 Oct 2023 20:02:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 20:23:56 +0000   Tue, 24 Oct 2023 20:02:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 20:23:56 +0000   Tue, 24 Oct 2023 20:02:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 20:23:56 +0000   Tue, 24 Oct 2023 20:13:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.162
	  Hostname:    no-preload-014826
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a313d69995c3482a9dba11eb665ee614
	  System UUID:                a313d699-95c3-482a-9dba-11eb665ee614
	  Boot ID:                    f6c96220-fb67-4529-bb83-eeb630a3972c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-gnn8j                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 etcd-no-preload-014826                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kube-apiserver-no-preload-014826             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-no-preload-014826    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-hvphg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-no-preload-014826             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 metrics-server-57f55c9bc5-tsfvs              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         23m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node no-preload-014826 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node no-preload-014826 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node no-preload-014826 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     24m                kubelet          Node no-preload-014826 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  24m                kubelet          Node no-preload-014826 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m                kubelet          Node no-preload-014826 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                24m                kubelet          Node no-preload-014826 status is now: NodeReady
	  Normal  RegisteredNode           24m                node-controller  Node no-preload-014826 event: Registered Node no-preload-014826 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-014826 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-014826 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-014826 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-014826 event: Registered Node no-preload-014826 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct24 20:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069873] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.942386] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.660930] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.144777] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.614614] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.570430] systemd-fstab-generator[633]: Ignoring "noauto" for root device
	[  +0.125801] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.151362] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.123736] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.233389] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[Oct24 20:13] systemd-fstab-generator[1268]: Ignoring "noauto" for root device
	[ +15.344562] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] <==
	* {"level":"info","ts":"2023-10-24T20:13:09.362254Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2de4c6e2c9b44383","local-member-id":"25a84ac227828bb5","added-peer-id":"25a84ac227828bb5","added-peer-peer-urls":["https://192.168.50.162:2380"]}
	{"level":"info","ts":"2023-10-24T20:13:09.362318Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2de4c6e2c9b44383","local-member-id":"25a84ac227828bb5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T20:13:09.362338Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T20:13:09.365174Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-24T20:13:09.365365Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"25a84ac227828bb5","initial-advertise-peer-urls":["https://192.168.50.162:2380"],"listen-peer-urls":["https://192.168.50.162:2380"],"advertise-client-urls":["https://192.168.50.162:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.162:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-24T20:13:09.365415Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T20:13:09.365542Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.162:2380"}
	{"level":"info","ts":"2023-10-24T20:13:09.365566Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.162:2380"}
	{"level":"info","ts":"2023-10-24T20:13:10.980448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25a84ac227828bb5 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-24T20:13:10.98061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25a84ac227828bb5 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-24T20:13:10.980766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25a84ac227828bb5 received MsgPreVoteResp from 25a84ac227828bb5 at term 2"}
	{"level":"info","ts":"2023-10-24T20:13:10.980807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25a84ac227828bb5 became candidate at term 3"}
	{"level":"info","ts":"2023-10-24T20:13:10.980831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25a84ac227828bb5 received MsgVoteResp from 25a84ac227828bb5 at term 3"}
	{"level":"info","ts":"2023-10-24T20:13:10.980858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25a84ac227828bb5 became leader at term 3"}
	{"level":"info","ts":"2023-10-24T20:13:10.980884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 25a84ac227828bb5 elected leader 25a84ac227828bb5 at term 3"}
	{"level":"info","ts":"2023-10-24T20:13:10.983459Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"25a84ac227828bb5","local-member-attributes":"{Name:no-preload-014826 ClientURLs:[https://192.168.50.162:2379]}","request-path":"/0/members/25a84ac227828bb5/attributes","cluster-id":"2de4c6e2c9b44383","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T20:13:10.983473Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T20:13:10.983795Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T20:13:10.983836Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T20:13:10.983503Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T20:13:10.985084Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T20:13:10.98579Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.162:2379"}
	{"level":"info","ts":"2023-10-24T20:23:11.017524Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":871}
	{"level":"info","ts":"2023-10-24T20:23:11.020739Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":871,"took":"2.847172ms","hash":2526692423}
	{"level":"info","ts":"2023-10-24T20:23:11.020802Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2526692423,"revision":871,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  20:27:04 up 14 min,  0 users,  load average: 0.06, 0.17, 0.15
	Linux no-preload-014826 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] <==
	* I1024 20:23:12.611445       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:23:13.611813       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:23:13.611872       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:23:13.611882       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:23:13.611955       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:23:13.612039       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:23:13.613072       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:24:12.444817       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:24:13.612437       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:24:13.612532       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:24:13.612560       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:24:13.613786       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:24:13.613915       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:24:13.613954       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:25:12.444881       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 20:26:12.445251       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:26:13.613615       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:26:13.613877       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:26:13.613949       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:26:13.614168       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:26:13.614405       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:26:13.615316       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] <==
	* I1024 20:21:25.551080       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:21:55.074003       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:21:55.559890       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:22:25.082962       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:22:25.570008       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:22:55.090304       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:22:55.579611       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:23:25.098172       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:23:25.588278       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:23:55.104292       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:23:55.599493       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1024 20:24:12.623045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="323.044µs"
	E1024 20:24:25.110483       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:24:25.621336       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1024 20:24:25.625948       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="95.626µs"
	E1024 20:24:55.117038       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:24:55.630084       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:25:25.123124       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:25:25.640718       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:25:55.129206       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:25:55.650320       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:26:25.135999       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:26:25.659319       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:26:55.142024       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:26:55.667482       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] <==
	* I1024 20:13:15.212378       1 server_others.go:69] "Using iptables proxy"
	I1024 20:13:15.223122       1 node.go:141] Successfully retrieved node IP: 192.168.50.162
	I1024 20:13:15.265823       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 20:13:15.265882       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 20:13:15.269409       1 server_others.go:152] "Using iptables Proxier"
	I1024 20:13:15.269489       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 20:13:15.269925       1 server.go:846] "Version info" version="v1.28.3"
	I1024 20:13:15.269977       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:13:15.271151       1 config.go:188] "Starting service config controller"
	I1024 20:13:15.271211       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 20:13:15.271240       1 config.go:97] "Starting endpoint slice config controller"
	I1024 20:13:15.271246       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 20:13:15.271977       1 config.go:315] "Starting node config controller"
	I1024 20:13:15.272031       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 20:13:15.371403       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 20:13:15.371523       1 shared_informer.go:318] Caches are synced for service config
	I1024 20:13:15.372122       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] <==
	* I1024 20:13:09.715128       1 serving.go:348] Generated self-signed cert in-memory
	W1024 20:13:12.555392       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 20:13:12.555519       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 20:13:12.555533       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 20:13:12.555541       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 20:13:12.611712       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 20:13:12.611756       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:13:12.614485       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 20:13:12.614543       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 20:13:12.617356       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 20:13:12.617507       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 20:13:12.715058       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 20:12:24 UTC, ends at Tue 2023-10-24 20:27:04 UTC. --
	Oct 24 20:24:05 no-preload-014826 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:24:05 no-preload-014826 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:24:12 no-preload-014826 kubelet[1274]: E1024 20:24:12.604211    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:24:25 no-preload-014826 kubelet[1274]: E1024 20:24:25.605253    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:24:36 no-preload-014826 kubelet[1274]: E1024 20:24:36.603217    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:24:48 no-preload-014826 kubelet[1274]: E1024 20:24:48.602763    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:24:59 no-preload-014826 kubelet[1274]: E1024 20:24:59.604331    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:25:05 no-preload-014826 kubelet[1274]: E1024 20:25:05.626439    1274 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:25:05 no-preload-014826 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:25:05 no-preload-014826 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:25:05 no-preload-014826 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:25:10 no-preload-014826 kubelet[1274]: E1024 20:25:10.603545    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:25:22 no-preload-014826 kubelet[1274]: E1024 20:25:22.603900    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:25:33 no-preload-014826 kubelet[1274]: E1024 20:25:33.603912    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:25:44 no-preload-014826 kubelet[1274]: E1024 20:25:44.603846    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:25:56 no-preload-014826 kubelet[1274]: E1024 20:25:56.603333    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:26:05 no-preload-014826 kubelet[1274]: E1024 20:26:05.630064    1274 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:26:05 no-preload-014826 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:26:05 no-preload-014826 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:26:05 no-preload-014826 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:26:10 no-preload-014826 kubelet[1274]: E1024 20:26:10.603435    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:26:23 no-preload-014826 kubelet[1274]: E1024 20:26:23.604416    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:26:34 no-preload-014826 kubelet[1274]: E1024 20:26:34.603764    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:26:46 no-preload-014826 kubelet[1274]: E1024 20:26:46.603725    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:26:57 no-preload-014826 kubelet[1274]: E1024 20:26:57.605923    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	
	* 
	* ==> storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] <==
	* I1024 20:13:45.981835       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 20:13:46.002048       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 20:13:46.002129       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 20:14:03.408423       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 20:14:03.408898       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-014826_54aa25c2-eba0-4c08-953b-3098a3702b2c!
	I1024 20:14:03.413355       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"02c1f45f-0c51-43a7-ac75-c7a0932ce4e8", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-014826_54aa25c2-eba0-4c08-953b-3098a3702b2c became leader
	I1024 20:14:03.512020       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-014826_54aa25c2-eba0-4c08-953b-3098a3702b2c!
	
	* 
	* ==> storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] <==
	* I1024 20:13:15.180950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1024 20:13:45.184772       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-014826 -n no-preload-014826
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-014826 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-tsfvs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-014826 describe pod metrics-server-57f55c9bc5-tsfvs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-014826 describe pod metrics-server-57f55c9bc5-tsfvs: exit status 1 (75.144241ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-tsfvs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-014826 describe pod metrics-server-57f55c9bc5-tsfvs: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.22s)
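Two details in the post-mortem above are worth noting. The metrics-server pod is expected to stay non-running: the addon was enabled with --registries=MetricsServer=fake.domain (recorded in the Audit table further down), so the image pull can never succeed and the kubelet keeps logging ImagePullBackOff. The final describe most likely returns NotFound only because it runs against the default namespace while the pod lives in kube-system, as the node's Non-terminated Pods list shows. A namespaced lookup, sketched here purely as an illustration and not as part of the harness, is how the pod could be inspected while the cluster is still up:

    kubectl --context no-preload-014826 -n kube-system describe pod metrics-server-57f55c9bc5-tsfvs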

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1024 20:19:42.155059   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 20:21:00.584856   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 20:23:10.558368   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 20:23:19.103779   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 20:24:33.605687   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467375 -n old-k8s-version-467375
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-24 20:28:22.110559658 +0000 UTC m=+5263.876038662
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
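For reference, the poll that timed out can be approximated by hand with kubectl against the same profile. The label selector, namespace, and timeout below are taken from the log lines above; the commands are only an illustration of the check, not the test's actual implementation, and kubectl wait errors immediately if no pod matches, whereas the test keeps polling until its deadline:

    kubectl --context old-k8s-version-467375 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context old-k8s-version-467375 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m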
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467375 -n old-k8s-version-467375
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-467375 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-467375 logs -n 25: (1.621457468s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-636215                                        | pause-636215                 | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:01 UTC |
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-145190                              | stopped-upgrade-145190       | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:01 UTC |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:02 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-087071 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | disable-driver-mounts-087071                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:05 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-014826             | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-867165            | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC | 24 Oct 23 20:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-643126  | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC | 24 Oct 23 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC |                     |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-014826                  | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-867165                 | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467375        | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-643126       | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:08 UTC | 24 Oct 23 20:16 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467375             | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC | 24 Oct 23 20:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 20:09:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 20:09:32.850310   50077 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:09:32.850450   50077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:09:32.850462   50077 out.go:309] Setting ErrFile to fd 2...
	I1024 20:09:32.850470   50077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:09:32.850632   50077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:09:32.851152   50077 out.go:303] Setting JSON to false
	I1024 20:09:32.851985   50077 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6471,"bootTime":1698171702,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 20:09:32.852046   50077 start.go:138] virtualization: kvm guest
	I1024 20:09:32.854420   50077 out.go:177] * [old-k8s-version-467375] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 20:09:32.855945   50077 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:09:32.855955   50077 notify.go:220] Checking for updates...
	I1024 20:09:32.857502   50077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:09:32.858984   50077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:09:32.860444   50077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:09:32.861833   50077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 20:09:32.863229   50077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:09:32.864917   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:09:32.865284   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:09:32.865345   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:09:32.879470   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I1024 20:09:32.879865   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:09:32.880332   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:09:32.880355   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:09:32.880731   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:09:32.880894   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:09:32.882647   50077 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 20:09:32.884050   50077 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:09:32.884316   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:09:32.884351   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:09:32.897671   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38215
	I1024 20:09:32.898054   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:09:32.898495   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:09:32.898521   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:09:32.898837   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:09:32.899002   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:09:32.933365   50077 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 20:09:32.934993   50077 start.go:298] selected driver: kvm2
	I1024 20:09:32.935008   50077 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:09:32.935100   50077 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:09:32.935713   50077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:09:32.935789   50077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 20:09:32.949274   50077 install.go:137] /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1024 20:09:32.949613   50077 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 20:09:32.949670   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:09:32.949682   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:09:32.949693   50077 start_flags.go:323] config:
	{Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:09:32.949823   50077 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:09:32.951734   50077 out.go:177] * Starting control plane node old-k8s-version-467375 in cluster old-k8s-version-467375
	I1024 20:09:31.289529   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:32.953102   50077 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 20:09:32.953131   50077 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1024 20:09:32.953140   50077 cache.go:57] Caching tarball of preloaded images
	I1024 20:09:32.953220   50077 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 20:09:32.953230   50077 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1024 20:09:32.953361   50077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:09:32.953531   50077 start.go:365] acquiring machines lock for old-k8s-version-467375: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:09:37.369555   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:40.441571   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:46.521544   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:49.593529   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:55.673497   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:58.745605   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:04.825563   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:07.897530   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:13.977541   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:17.049658   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:23.129561   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:26.201528   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:32.281583   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:35.353592   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:41.433570   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:44.505586   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:50.585514   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:53.657506   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:59.737620   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:11:02.809631   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:11:05.812536   49198 start.go:369] acquired machines lock for "embed-certs-867165" in 4m26.940203259s
	I1024 20:11:05.812584   49198 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:05.812594   49198 fix.go:54] fixHost starting: 
	I1024 20:11:05.812911   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:05.812959   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:05.827853   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I1024 20:11:05.828400   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:05.828896   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:05.828922   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:05.829237   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:05.829432   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:05.829588   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:05.831229   49198 fix.go:102] recreateIfNeeded on embed-certs-867165: state=Stopped err=<nil>
	I1024 20:11:05.831249   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	W1024 20:11:05.831407   49198 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:05.833007   49198 out.go:177] * Restarting existing kvm2 VM for "embed-certs-867165" ...
	I1024 20:11:05.810496   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:05.810546   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:11:05.812388   49071 machine.go:91] provisioned docker machine in 4m37.419019216s
	I1024 20:11:05.812422   49071 fix.go:56] fixHost completed within 4m37.4383256s
	I1024 20:11:05.812427   49071 start.go:83] releasing machines lock for "no-preload-014826", held for 4m37.438344867s
	W1024 20:11:05.812453   49071 start.go:691] error starting host: provision: host is not running
	W1024 20:11:05.812538   49071 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1024 20:11:05.812551   49071 start.go:706] Will try again in 5 seconds ...
	I1024 20:11:05.834235   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Start
	I1024 20:11:05.834397   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring networks are active...
	I1024 20:11:05.835212   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring network default is active
	I1024 20:11:05.835540   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring network mk-embed-certs-867165 is active
	I1024 20:11:05.835850   49198 main.go:141] libmachine: (embed-certs-867165) Getting domain xml...
	I1024 20:11:05.836556   49198 main.go:141] libmachine: (embed-certs-867165) Creating domain...
	I1024 20:11:07.054253   49198 main.go:141] libmachine: (embed-certs-867165) Waiting to get IP...
	I1024 20:11:07.055379   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.055819   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.055911   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.055829   50328 retry.go:31] will retry after 212.147571ms: waiting for machine to come up
	I1024 20:11:07.269505   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.269953   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.270002   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.269942   50328 retry.go:31] will retry after 308.705783ms: waiting for machine to come up
	I1024 20:11:07.580602   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.581075   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.581103   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.581041   50328 retry.go:31] will retry after 467.682838ms: waiting for machine to come up
	I1024 20:11:08.050725   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:08.051121   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:08.051154   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:08.051070   50328 retry.go:31] will retry after 399.648518ms: waiting for machine to come up
	I1024 20:11:08.452605   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:08.452968   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:08.452999   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:08.452906   50328 retry.go:31] will retry after 617.165915ms: waiting for machine to come up
	I1024 20:11:10.812763   49071 start.go:365] acquiring machines lock for no-preload-014826: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:11:09.071803   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:09.072236   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:09.072268   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:09.072205   50328 retry.go:31] will retry after 678.895198ms: waiting for machine to come up
	I1024 20:11:09.753179   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:09.753658   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:09.753689   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:09.753600   50328 retry.go:31] will retry after 807.254598ms: waiting for machine to come up
	I1024 20:11:10.562345   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:10.562733   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:10.562761   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:10.562688   50328 retry.go:31] will retry after 921.950476ms: waiting for machine to come up
	I1024 20:11:11.485981   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:11.486498   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:11.486524   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:11.486452   50328 retry.go:31] will retry after 1.56679652s: waiting for machine to come up
	I1024 20:11:13.055209   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:13.055638   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:13.055664   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:13.055594   50328 retry.go:31] will retry after 2.296157501s: waiting for machine to come up
	I1024 20:11:15.355156   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:15.355522   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:15.355555   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:15.355460   50328 retry.go:31] will retry after 1.913484523s: waiting for machine to come up
	I1024 20:11:17.270771   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:17.271200   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:17.271237   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:17.271154   50328 retry.go:31] will retry after 2.867410465s: waiting for machine to come up
	I1024 20:11:20.142209   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:20.142651   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:20.142675   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:20.142603   50328 retry.go:31] will retry after 4.193720328s: waiting for machine to come up
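The repeating "will retry after …: waiting for machine to come up" DBG lines above come from a polling loop: the driver keeps asking libvirt for the VM's DHCP lease and sleeps a growing, jittered interval between attempts. A minimal Go sketch of that pattern (names and intervals are assumptions, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP (a stand-in for querying the domain's DHCP lease)
	// until it returns an address or the deadline expires, sleeping a growing,
	// jittered interval between attempts, like the 212ms, 308ms, 467ms… cadence above.
	func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		backoff := 200 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			backoff = backoff * 3 / 2 // grow the base interval each round
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.72.10", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}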
	I1024 20:11:25.925856   49708 start.go:369] acquired machines lock for "default-k8s-diff-port-643126" in 3m22.313323811s
	I1024 20:11:25.925904   49708 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:25.925911   49708 fix.go:54] fixHost starting: 
	I1024 20:11:25.926296   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:25.926331   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:25.942871   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I1024 20:11:25.943321   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:25.943866   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:11:25.943890   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:25.944187   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:25.944359   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:25.944510   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:11:25.945833   49708 fix.go:102] recreateIfNeeded on default-k8s-diff-port-643126: state=Stopped err=<nil>
	I1024 20:11:25.945875   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	W1024 20:11:25.946039   49708 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:25.949057   49708 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-643126" ...
	I1024 20:11:24.340353   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.340876   49198 main.go:141] libmachine: (embed-certs-867165) Found IP for machine: 192.168.72.10
	I1024 20:11:24.340899   49198 main.go:141] libmachine: (embed-certs-867165) Reserving static IP address...
	I1024 20:11:24.340912   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has current primary IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.341389   49198 main.go:141] libmachine: (embed-certs-867165) Reserved static IP address: 192.168.72.10
	I1024 20:11:24.341430   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "embed-certs-867165", mac: "52:54:00:59:66:c6", ip: "192.168.72.10"} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.341453   49198 main.go:141] libmachine: (embed-certs-867165) Waiting for SSH to be available...
	I1024 20:11:24.341482   49198 main.go:141] libmachine: (embed-certs-867165) DBG | skip adding static IP to network mk-embed-certs-867165 - found existing host DHCP lease matching {name: "embed-certs-867165", mac: "52:54:00:59:66:c6", ip: "192.168.72.10"}
	I1024 20:11:24.341500   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Getting to WaitForSSH function...
	I1024 20:11:24.343707   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.344021   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.344050   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.344202   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Using SSH client type: external
	I1024 20:11:24.344229   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa (-rw-------)
	I1024 20:11:24.344263   49198 main.go:141] libmachine: (embed-certs-867165) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:11:24.344279   49198 main.go:141] libmachine: (embed-certs-867165) DBG | About to run SSH command:
	I1024 20:11:24.344290   49198 main.go:141] libmachine: (embed-certs-867165) DBG | exit 0
	I1024 20:11:24.433113   49198 main.go:141] libmachine: (embed-certs-867165) DBG | SSH cmd err, output: <nil>: 
	I1024 20:11:24.433578   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetConfigRaw
	I1024 20:11:24.434267   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:24.436768   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.437149   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.437178   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.437479   49198 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/config.json ...
	I1024 20:11:24.437738   49198 machine.go:88] provisioning docker machine ...
	I1024 20:11:24.437760   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:24.438014   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.438217   49198 buildroot.go:166] provisioning hostname "embed-certs-867165"
	I1024 20:11:24.438245   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.438431   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.440509   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.440861   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.440884   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.440998   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:24.441155   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.441329   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.441499   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:24.441644   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:24.441990   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:24.442009   49198 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-867165 && echo "embed-certs-867165" | sudo tee /etc/hostname
	I1024 20:11:24.570417   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-867165
	
	I1024 20:11:24.570456   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.573010   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.573421   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.573446   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.573634   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:24.573845   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.574000   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.574100   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:24.574296   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:24.574611   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:24.574628   49198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-867165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-867165/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-867165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:11:24.698255   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
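The two SSH commands above (set the hostname, then patch /etc/hosts) are how the provisioner makes the machine's own name resolvable locally. A small Go sketch of the /etc/hosts logic in that shell snippet (hypothetical helper, not minikube's code): leave the file alone if the hostname already appears, otherwise rewrite an existing 127.0.1.1 line or append one.

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry mirrors the shell snippet above: if hostname is already in
	// the hosts content, return it unchanged; otherwise replace an existing
	// "127.0.1.1 ..." line, or append a new one.
	func ensureHostsEntry(hosts, hostname string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			for _, field := range strings.Fields(l) {
				if field == hostname {
					return hosts // already resolvable
				}
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + hostname
	}

	func main() {
		fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube", "embed-certs-867165"))
	}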
	I1024 20:11:24.698281   49198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:11:24.698298   49198 buildroot.go:174] setting up certificates
	I1024 20:11:24.698306   49198 provision.go:83] configureAuth start
	I1024 20:11:24.698317   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.698624   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:24.701552   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.701900   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.701954   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.702044   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.704047   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.704389   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.704413   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.704578   49198 provision.go:138] copyHostCerts
	I1024 20:11:24.704632   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:11:24.704648   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:11:24.704713   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:11:24.704794   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:11:24.704801   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:11:24.704828   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:11:24.704877   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:11:24.704883   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:11:24.704901   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:11:24.704961   49198 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.embed-certs-867165 san=[192.168.72.10 192.168.72.10 localhost 127.0.0.1 minikube embed-certs-867165]
	I1024 20:11:25.212018   49198 provision.go:172] copyRemoteCerts
	I1024 20:11:25.212075   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:11:25.212095   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.214791   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.215112   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.215141   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.215262   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.215490   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.215682   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.215805   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.301782   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:11:25.324352   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1024 20:11:25.346349   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:11:25.368012   49198 provision.go:86] duration metric: configureAuth took 669.695412ms
	I1024 20:11:25.368036   49198 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:11:25.368205   49198 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:25.368269   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.370479   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.370739   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.370782   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.370873   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.371063   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.371395   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.371593   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.371760   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:25.372083   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:25.372098   49198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:11:25.685250   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:11:25.685327   49198 machine.go:91] provisioned docker machine in 1.247541762s
	I1024 20:11:25.685347   49198 start.go:300] post-start starting for "embed-certs-867165" (driver="kvm2")
	I1024 20:11:25.685363   49198 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:11:25.685388   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.685781   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:11:25.685813   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.688378   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.688666   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.688712   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.688886   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.689115   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.689274   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.689463   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.775321   49198 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:11:25.779494   49198 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:11:25.779516   49198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:11:25.779590   49198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:11:25.779663   49198 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:11:25.779748   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:11:25.788441   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:25.809843   49198 start.go:303] post-start completed in 124.478424ms
	I1024 20:11:25.809946   49198 fix.go:56] fixHost completed within 19.997269664s
	I1024 20:11:25.809985   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.812709   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.813101   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.813133   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.813265   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.813464   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.813650   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.813819   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.813962   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:25.814293   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:25.814309   49198 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:11:25.925691   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178285.873274561
	
	I1024 20:11:25.925721   49198 fix.go:206] guest clock: 1698178285.873274561
	I1024 20:11:25.925731   49198 fix.go:219] Guest: 2023-10-24 20:11:25.873274561 +0000 UTC Remote: 2023-10-24 20:11:25.809967209 +0000 UTC m=+287.089115618 (delta=63.307352ms)
	I1024 20:11:25.925760   49198 fix.go:190] guest clock delta is within tolerance: 63.307352ms
	I1024 20:11:25.925767   49198 start.go:83] releasing machines lock for "embed-certs-867165", held for 20.113201351s
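The guest/host delta reported above (63.307352ms) comes from running `date +%s.%N` inside the guest and comparing the result with the host clock; the skew stayed inside tolerance, so no adjustment was needed. A rough Go sketch of that comparison (the 1s tolerance is an assumption, not the value minikube uses):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the guest's "seconds.nanoseconds" timestamp (the
	// output of `date +%s.%N`) and returns its offset from the host clock.
	func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		// Values taken from the log lines above: guest 1698178285.873274561,
		// host (remote) 2023-10-24 20:11:25.809967209 UTC.
		d, _ := guestClockDelta("1698178285.873274561", time.Unix(1698178285, 809967209))
		fmt.Printf("delta=%v, within 1s tolerance: %v\n", d, math.Abs(d.Seconds()) < 1)
	}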
	I1024 20:11:25.925801   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.926046   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:25.928979   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.929337   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.929369   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.929547   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930011   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930171   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930239   49198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:11:25.930285   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.930332   49198 ssh_runner.go:195] Run: cat /version.json
	I1024 20:11:25.930356   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.932685   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.932918   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933167   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.933197   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933225   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.933254   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933377   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.933548   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.933600   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.933758   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.933773   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.933934   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.933941   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.934075   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:26.046804   49198 ssh_runner.go:195] Run: systemctl --version
	I1024 20:11:26.052139   49198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:11:26.195404   49198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:11:26.201515   49198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:11:26.201602   49198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:11:26.215298   49198 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:11:26.215312   49198 start.go:472] detecting cgroup driver to use...
	I1024 20:11:26.215375   49198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:11:26.228683   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:11:26.240279   49198 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:11:26.240328   49198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:11:26.252314   49198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:11:26.264748   49198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:11:26.363370   49198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:11:26.472219   49198 docker.go:214] disabling docker service ...
	I1024 20:11:26.472293   49198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:11:26.485325   49198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:11:26.497949   49198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:11:26.614981   49198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:11:26.731140   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:11:26.750199   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:11:26.770158   49198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:11:26.770224   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.781180   49198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:11:26.781246   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.791901   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.802435   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.812848   49198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:11:26.826330   49198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:11:26.837268   49198 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:11:26.837350   49198 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:11:26.853637   49198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:11:26.866347   49198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:11:26.985185   49198 ssh_runner.go:195] Run: sudo systemctl restart crio
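The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed edits (pin the pause image to registry.k8s.io/pause:3.9, force the cgroupfs cgroup manager, and run conmon in the pod cgroup) before reloading systemd and restarting crio. A minimal Go sketch of those same text rewrites, assuming the stock key = "value" layout of that file:

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf applies the same four edits as the sed commands above, in order.
	func rewriteCrioConf(conf string) string {
		// pin the pause image
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		// switch the cgroup manager to cgroupfs
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// drop any existing conmon_cgroup line
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAllString(conf, "")
		// re-add conmon_cgroup right after cgroup_manager
		conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
			ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
		return conf
	}

	func main() {
		fmt.Println(rewriteCrioConf("pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\""))
	}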
	I1024 20:11:27.154650   49198 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:11:27.154718   49198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:11:27.160801   49198 start.go:540] Will wait 60s for crictl version
	I1024 20:11:27.160848   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:11:27.164920   49198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:11:27.202690   49198 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:11:27.202779   49198 ssh_runner.go:195] Run: crio --version
	I1024 20:11:27.250594   49198 ssh_runner.go:195] Run: crio --version
	I1024 20:11:27.296108   49198 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 20:11:25.950421   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Start
	I1024 20:11:25.950594   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring networks are active...
	I1024 20:11:25.951296   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring network default is active
	I1024 20:11:25.951666   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring network mk-default-k8s-diff-port-643126 is active
	I1024 20:11:25.952059   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Getting domain xml...
	I1024 20:11:25.952807   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Creating domain...
	I1024 20:11:27.231286   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting to get IP...
	I1024 20:11:27.232283   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.232673   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.232749   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.232677   50444 retry.go:31] will retry after 208.58934ms: waiting for machine to come up
	I1024 20:11:27.443376   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.443879   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.443919   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.443821   50444 retry.go:31] will retry after 257.382495ms: waiting for machine to come up
	I1024 20:11:27.703424   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.703968   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.704002   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.703931   50444 retry.go:31] will retry after 397.047762ms: waiting for machine to come up
	I1024 20:11:28.102593   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.103138   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.103169   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:28.103091   50444 retry.go:31] will retry after 512.560427ms: waiting for machine to come up
	I1024 20:11:27.297540   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:27.300396   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:27.300799   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:27.300829   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:27.301066   49198 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1024 20:11:27.305045   49198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
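The hosts command above is the usual filter-then-append rewrite: any stale host.minikube.internal line is stripped, the new mapping is appended, the result goes to a temp file, and sudo cp copies it over /etc/hosts. A minimal Go version of the same rewrite, shown here only as a sketch and run against a throwaway path rather than the real /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // addHostEntry rewrites hostsPath so it contains exactly one line mapping
    // name to ip, dropping any stale line for that name first.
    func addHostEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
                continue // drop the old mapping, mirroring the grep -v step
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        path := "/tmp/hosts-demo" // stand-in for /etc/hosts
        _ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644)
        if err := addHostEntry(path, "192.168.72.1", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }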
	I1024 20:11:27.320300   49198 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:11:27.320366   49198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:27.359702   49198 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:11:27.359766   49198 ssh_runner.go:195] Run: which lz4
	I1024 20:11:27.363540   49198 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 20:11:27.367559   49198 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:11:27.367583   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1024 20:11:28.616845   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.617310   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.617342   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:28.617240   50444 retry.go:31] will retry after 674.554893ms: waiting for machine to come up
	I1024 20:11:29.293139   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:29.293640   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:29.293667   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:29.293603   50444 retry.go:31] will retry after 903.982479ms: waiting for machine to come up
	I1024 20:11:30.199764   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:30.200181   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:30.200218   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:30.200119   50444 retry.go:31] will retry after 835.036056ms: waiting for machine to come up
	I1024 20:11:31.037123   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:31.037584   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:31.037609   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:31.037524   50444 retry.go:31] will retry after 1.242617103s: waiting for machine to come up
	I1024 20:11:32.281808   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:32.282287   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:32.282312   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:32.282243   50444 retry.go:31] will retry after 1.694327665s: waiting for machine to come up
	I1024 20:11:29.249631   49198 crio.go:444] Took 1.886122 seconds to copy over tarball
	I1024 20:11:29.249712   49198 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:11:32.249370   49198 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.999632152s)
	I1024 20:11:32.249396   49198 crio.go:451] Took 2.999736 seconds to extract the tarball
	I1024 20:11:32.249404   49198 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:11:32.290929   49198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:32.335293   49198 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:11:32.335313   49198 cache_images.go:84] Images are preloaded, skipping loading
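The preload sequence above works in two passes: `crictl images --output json` is checked for a pivot image (kube-apiserver for the target Kubernetes version); when it is missing, the cached preloaded-images tarball is copied over SSH to /preloaded.tar.lz4 and unpacked into /var with `tar -I lz4`, after which the same crictl check succeeds and image loading is skipped. A rough sketch of that decision, assuming crictl is on PATH and using a plain substring check where the real code parses the JSON:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    // imagePreloaded reports whether the pivot image already shows up in
    // `crictl images --output json` on the node (simplified substring check).
    func imagePreloaded(pivot string) (bool, error) {
        var out bytes.Buffer
        cmd := exec.Command("sudo", "crictl", "images", "--output", "json")
        cmd.Stdout = &out
        if err := cmd.Run(); err != nil {
            return false, err
        }
        return strings.Contains(out.String(), pivot), nil
    }

    func main() {
        pivot := "registry.k8s.io/kube-apiserver:v1.28.3"
        ok, err := imagePreloaded(pivot)
        if err != nil {
            fmt.Println("crictl not available:", err)
            return
        }
        if ok {
            fmt.Println("all images are preloaded, skipping tarball")
            return
        }
        // Otherwise the tarball would be copied to /preloaded.tar.lz4 and
        // unpacked, mirroring `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4`.
        fmt.Println("would extract /preloaded.tar.lz4 into /var")
    }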
	I1024 20:11:32.335377   49198 ssh_runner.go:195] Run: crio config
	I1024 20:11:32.394988   49198 cni.go:84] Creating CNI manager for ""
	I1024 20:11:32.395016   49198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:32.395039   49198 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:11:32.395066   49198 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.10 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-867165 NodeName:embed-certs-867165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:11:32.395267   49198 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-867165"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:11:32.395363   49198 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-867165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-867165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:11:32.395412   49198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:11:32.408764   49198 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:11:32.408827   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:11:32.417504   49198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1024 20:11:32.433991   49198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:11:32.450599   49198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1024 20:11:32.467822   49198 ssh_runner.go:195] Run: grep 192.168.72.10	control-plane.minikube.internal$ /etc/hosts
	I1024 20:11:32.471830   49198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:32.485398   49198 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165 for IP: 192.168.72.10
	I1024 20:11:32.485440   49198 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:32.485591   49198 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:11:32.485627   49198 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:11:32.485692   49198 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/client.key
	I1024 20:11:32.485751   49198 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.key.802f554a
	I1024 20:11:32.485787   49198 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.key
	I1024 20:11:32.485883   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:11:32.485913   49198 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:11:32.485924   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:11:32.485946   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:11:32.485974   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:11:32.485999   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:11:32.486054   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:32.486664   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:11:32.510981   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:11:32.533691   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:11:32.556372   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:11:32.578805   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:11:32.601563   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:11:32.624846   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:11:32.648498   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:11:32.672429   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:11:32.696146   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:11:32.719078   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:11:32.742894   49198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:11:32.758998   49198 ssh_runner.go:195] Run: openssl version
	I1024 20:11:32.764797   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:11:32.774075   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.778755   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.778809   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.784097   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:11:32.793365   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:11:32.802532   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.806890   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.806936   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.812430   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:11:32.821767   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:11:32.830930   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.835401   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.835455   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.840880   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:11:32.850124   49198 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:11:32.854525   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:11:32.860161   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:11:32.866096   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:11:32.873246   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:11:32.880430   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:11:32.887436   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
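The openssl calls above implement two separate checks: `openssl x509 -hash -noout` prints the subject hash that OpenSSL expects as the symlink name under /etc/ssl/certs (the b5213941.0-style links created right after each hash), and `-checkend 86400` exits non-zero if the certificate expires within the next 24 hours, which is what forces regeneration on restart. A small sketch that shells out the same way, assuming openssl is on PATH; note the expiry helper also returns true if openssl itself fails, which the real code distinguishes:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // subjectHash returns the OpenSSL subject hash used to name the
    // /etc/ssl/certs/<hash>.0 symlink for a CA certificate.
    func subjectHash(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    // expiresWithin reports whether the certificate expires within the given
    // number of seconds; `openssl x509 -checkend N` exits non-zero in that case.
    func expiresWithin(certPath string, seconds int) bool {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath,
            "-checkend", fmt.Sprint(seconds))
        return cmd.Run() != nil
    }

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
        if h, err := subjectHash(cert); err == nil {
            fmt.Printf("would link /etc/ssl/certs/%s.0 -> %s\n", h, cert)
        }
        if expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 86400) {
            fmt.Println("cert expires within 24h (or could not be read), would regenerate")
        }
    }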
	I1024 20:11:32.892960   49198 kubeadm.go:404] StartCluster: {Name:embed-certs-867165 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.3 ClusterName:embed-certs-867165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:11:32.893073   49198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:11:32.893116   49198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:32.930748   49198 cri.go:89] found id: ""
	I1024 20:11:32.930817   49198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:11:32.939716   49198 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:11:32.939738   49198 kubeadm.go:636] restartCluster start
	I1024 20:11:32.939785   49198 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:11:32.947747   49198 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:32.948905   49198 kubeconfig.go:92] found "embed-certs-867165" server: "https://192.168.72.10:8443"
	I1024 20:11:32.951235   49198 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:11:32.959165   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:32.959215   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:32.970896   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:32.970912   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:32.970957   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:32.980621   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:33.481345   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:33.481442   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:33.492666   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:33.979087   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:33.979490   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:33.979520   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:33.979433   50444 retry.go:31] will retry after 1.877176786s: waiting for machine to come up
	I1024 20:11:35.859337   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:35.859735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:35.859758   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:35.859683   50444 retry.go:31] will retry after 2.235459842s: waiting for machine to come up
	I1024 20:11:38.097481   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:38.097924   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:38.097958   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:38.097878   50444 retry.go:31] will retry after 3.083066899s: waiting for machine to come up
	I1024 20:11:33.981370   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.077568   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.088845   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:34.481489   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.481554   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.492934   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:34.981614   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.981744   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.993154   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:35.480679   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:35.480752   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:35.492474   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:35.981612   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:35.981703   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:35.992389   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:36.480877   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:36.480982   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:36.492142   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:36.980700   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:36.980784   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:36.992439   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:37.480962   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:37.481040   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:37.492219   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:37.980706   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:37.980814   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:37.992040   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:38.481668   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:38.481764   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:38.493319   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.182306   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:41.182647   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:41.182674   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:41.182602   50444 retry.go:31] will retry after 3.348794863s: waiting for machine to come up
	I1024 20:11:38.981418   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:38.981504   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:38.992810   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:39.481357   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:39.481448   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:39.492521   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:39.981019   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:39.981109   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:39.992766   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:40.481341   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:40.481404   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:40.492180   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:40.981106   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:40.981205   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:40.991931   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.481563   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:41.481629   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:41.492601   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.981132   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:41.981226   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:41.992061   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:42.481647   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:42.481713   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:42.492524   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:42.960175   49198 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:11:42.960230   49198 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:11:42.960243   49198 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:11:42.960322   49198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:42.998685   49198 cri.go:89] found id: ""
	I1024 20:11:42.998794   49198 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:11:43.013829   49198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:11:43.023081   49198 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:11:43.023161   49198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:11:43.032165   49198 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:11:43.032189   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:43.148027   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
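The long run of "Checking apiserver status ..." messages above is a poll: roughly every half second restartCluster looks for a kube-apiserver process with pgrep, and when nothing appears before the context deadline it decides the cluster "needs reconfigure", stops kubelet, and rebuilds certificates and kubeconfigs with the `kubeadm init phase certs all` / `kubeadm init phase kubeconfig all` commands shown just above. A hedged sketch of that poll-until-deadline shape, simplifying the real pgrep flags:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // apiServerRunning reports whether a kube-apiserver process is visible to
    // pgrep, a simplified stand-in for `sudo pgrep -xnf kube-apiserver.*minikube.*`.
    func apiServerRunning() bool {
        return exec.Command("pgrep", "-f", "kube-apiserver").Run() == nil
    }

    // waitForAPIServer polls until the apiserver shows up or ctx expires.
    func waitForAPIServer(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            if apiServerRunning() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // surfaces as "context deadline exceeded" in the log
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if err := waitForAPIServer(ctx); err != nil {
            fmt.Println("needs reconfigure: apiserver error:", err)
            // Here the real flow would run, against /var/tmp/minikube/kubeadm.yaml:
            //   kubeadm init phase certs all --config ...
            //   kubeadm init phase kubeconfig all --config ...
        }
    }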
	I1024 20:11:45.942484   50077 start.go:369] acquired machines lock for "old-k8s-version-467375" in 2m12.988914754s
	I1024 20:11:45.942540   50077 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:45.942548   50077 fix.go:54] fixHost starting: 
	I1024 20:11:45.942969   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:45.943007   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:45.960424   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I1024 20:11:45.960851   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:45.961468   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:11:45.961498   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:45.961852   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:45.962045   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:11:45.962231   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:11:45.963803   50077 fix.go:102] recreateIfNeeded on old-k8s-version-467375: state=Stopped err=<nil>
	I1024 20:11:45.963841   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	W1024 20:11:45.964018   50077 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:45.965809   50077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467375" ...
	I1024 20:11:44.535120   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.535710   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Found IP for machine: 192.168.61.148
	I1024 20:11:44.535735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has current primary IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.535742   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Reserving static IP address...
	I1024 20:11:44.536160   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Reserved static IP address: 192.168.61.148
	I1024 20:11:44.536181   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for SSH to be available...
	I1024 20:11:44.536196   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-643126", mac: "52:54:00:9d:a9:b2", ip: "192.168.61.148"} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.536225   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | skip adding static IP to network mk-default-k8s-diff-port-643126 - found existing host DHCP lease matching {name: "default-k8s-diff-port-643126", mac: "52:54:00:9d:a9:b2", ip: "192.168.61.148"}
	I1024 20:11:44.536247   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Getting to WaitForSSH function...
	I1024 20:11:44.538297   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.538634   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.538669   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.538819   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Using SSH client type: external
	I1024 20:11:44.538846   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa (-rw-------)
	I1024 20:11:44.538897   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:11:44.538935   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | About to run SSH command:
	I1024 20:11:44.538947   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | exit 0
	I1024 20:11:44.629136   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | SSH cmd err, output: <nil>: 
	I1024 20:11:44.629505   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetConfigRaw
	I1024 20:11:44.630190   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:44.632462   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.632782   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.632807   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.633035   49708 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/config.json ...
	I1024 20:11:44.633215   49708 machine.go:88] provisioning docker machine ...
	I1024 20:11:44.633231   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:44.633416   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.633566   49708 buildroot.go:166] provisioning hostname "default-k8s-diff-port-643126"
	I1024 20:11:44.633580   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.633778   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.635853   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.636184   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.636217   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.636295   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:44.636462   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.636608   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.636742   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:44.636890   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:44.637307   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:44.637328   49708 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-643126 && echo "default-k8s-diff-port-643126" | sudo tee /etc/hostname
	I1024 20:11:44.775436   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-643126
	
	I1024 20:11:44.775468   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.778835   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.779280   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.779316   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.779494   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:44.779679   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.779810   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.779933   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:44.780147   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:44.780489   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:44.780518   49708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-643126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-643126/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-643126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:11:44.921274   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:44.921332   49708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:11:44.921368   49708 buildroot.go:174] setting up certificates
	I1024 20:11:44.921385   49708 provision.go:83] configureAuth start
	I1024 20:11:44.921404   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.921747   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:44.924977   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.925413   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.925443   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.925641   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.928106   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.928443   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.928484   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.928617   49708 provision.go:138] copyHostCerts
	I1024 20:11:44.928680   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:11:44.928703   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:11:44.928772   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:11:44.928918   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:11:44.928935   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:11:44.928969   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:11:44.929052   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:11:44.929063   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:11:44.929089   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:11:44.929157   49708 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-643126 san=[192.168.61.148 192.168.61.148 localhost 127.0.0.1 minikube default-k8s-diff-port-643126]
	I1024 20:11:45.170614   49708 provision.go:172] copyRemoteCerts
	I1024 20:11:45.170679   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:11:45.170706   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.173876   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.174213   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.174251   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.174522   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.174744   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.174909   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.175033   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.266012   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1024 20:11:45.294626   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:11:45.323773   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:11:45.347515   49708 provision.go:86] duration metric: configureAuth took 426.107365ms
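configureAuth above regenerates a CA-signed server certificate whose SANs cover the machine IP, localhost, 127.0.0.1, "minikube" and the VM hostname, so the client can verify the remote daemon under any of those names, then copies server.pem, server-key.pem and ca.pem into /etc/docker. A minimal sketch of issuing such a certificate with crypto/x509, under the assumption of a freshly generated throwaway CA; this is illustrative only and not minikube's implementation:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newServerCert creates a throwaway CA plus a server certificate whose
    // SANs cover the machine IP, localhost and the hostname.
    func newServerCert(ip, hostname string) ([]byte, error) {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "demo CA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration value seen in the config
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            return nil, err
        }
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: hostname},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", hostname},
            IPAddresses:  []net.IP{net.ParseIP(ip), net.ParseIP("127.0.0.1")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), nil
    }

    func main() {
        pemBytes, err := newServerCert("192.168.61.148", "default-k8s-diff-port-643126")
        if err != nil {
            panic(err)
        }
        fmt.Printf("server.pem is %d bytes\n", len(pemBytes))
    }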
	I1024 20:11:45.347536   49708 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:11:45.347741   49708 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:45.347830   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.351151   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.351529   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.351560   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.351729   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.351898   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.352132   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.352359   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.352540   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:45.353017   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:45.353045   49708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:11:45.673767   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:11:45.673797   49708 machine.go:91] provisioned docker machine in 1.04057128s
	I1024 20:11:45.673809   49708 start.go:300] post-start starting for "default-k8s-diff-port-643126" (driver="kvm2")
	I1024 20:11:45.673821   49708 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:11:45.673844   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.674180   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:11:45.674213   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.677192   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.677621   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.677663   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.677817   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.678021   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.678180   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.678322   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.769507   49708 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:11:45.774136   49708 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:11:45.774161   49708 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:11:45.774240   49708 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:11:45.774333   49708 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:11:45.774456   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:11:45.782941   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:45.806536   49708 start.go:303] post-start completed in 132.710109ms
	I1024 20:11:45.806565   49708 fix.go:56] fixHost completed within 19.880653804s
	I1024 20:11:45.806602   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.809496   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.809854   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.809892   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.810096   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.810335   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.810534   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.810697   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.810870   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:45.811297   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:45.811312   49708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1024 20:11:45.942309   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178305.886866858
	
	I1024 20:11:45.942334   49708 fix.go:206] guest clock: 1698178305.886866858
	I1024 20:11:45.942343   49708 fix.go:219] Guest: 2023-10-24 20:11:45.886866858 +0000 UTC Remote: 2023-10-24 20:11:45.806569839 +0000 UTC m=+222.349889294 (delta=80.297019ms)
	I1024 20:11:45.942388   49708 fix.go:190] guest clock delta is within tolerance: 80.297019ms
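	The guest-clock check above boils down to sampling the guest clock over SSH and diffing it against the host. A minimal sketch of the same comparison, assuming the SSH key path, user and IP logged above and awk on the host:
	# sample guest and host clocks as seconds.nanoseconds and print the skew
	GUEST=$(ssh -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa docker@192.168.61.148 'date +%s.%N')
	HOST=$(date +%s.%N)
	awk -v h="$HOST" -v g="$GUEST" 'BEGIN { printf "delta: %.6fs\n", h - g }'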
	I1024 20:11:45.942399   49708 start.go:83] releasing machines lock for "default-k8s-diff-port-643126", held for 20.016514097s
	I1024 20:11:45.942428   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.942819   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:45.946079   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.946507   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.946548   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.946681   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947120   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947286   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947353   49708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:11:45.947411   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.947564   49708 ssh_runner.go:195] Run: cat /version.json
	I1024 20:11:45.947591   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.950504   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.950930   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.950984   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951010   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951176   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.951342   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.951499   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.951522   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.951526   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951638   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.951793   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.951946   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.952178   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.952345   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:46.043544   49708 ssh_runner.go:195] Run: systemctl --version
	I1024 20:11:46.072510   49708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:11:46.230010   49708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:11:46.237538   49708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:11:46.237608   49708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:11:46.259449   49708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:11:46.259468   49708 start.go:472] detecting cgroup driver to use...
	I1024 20:11:46.259530   49708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:11:46.278708   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:11:46.292769   49708 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:11:46.292827   49708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:11:46.311808   49708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:11:46.329420   49708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:11:46.452375   49708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:11:46.584041   49708 docker.go:214] disabling docker service ...
	I1024 20:11:46.584114   49708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:11:46.606114   49708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:11:46.623302   49708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:11:46.732771   49708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:11:46.862687   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:11:46.879573   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:11:46.900885   49708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:11:46.900955   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.911441   49708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:11:46.911500   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.921674   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.931937   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.942104   49708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:11:46.952610   49708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:11:46.961808   49708 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:11:46.961884   49708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:11:46.977789   49708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:11:46.990089   49708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:11:47.130248   49708 ssh_runner.go:195] Run: sudo systemctl restart crio
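	Taken together, the runtime reconfiguration above points CRI-O at the minikube pause image, switches it to the cgroupfs driver, and makes bridge traffic visible to iptables before the restart. A minimal sketch of that same sequence, assuming sudo on the guest:
	# pause image and cgroup driver must match what kubeadm is configured with below
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# load the bridge netfilter module so bridge-nf-call-iptables exists, then allow forwarding
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio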
	I1024 20:11:47.307336   49708 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:11:47.307402   49708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:11:47.316743   49708 start.go:540] Will wait 60s for crictl version
	I1024 20:11:47.316795   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:11:47.321526   49708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:11:47.369079   49708 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:11:47.369169   49708 ssh_runner.go:195] Run: crio --version
	I1024 20:11:47.419428   49708 ssh_runner.go:195] Run: crio --version
	I1024 20:11:47.477016   49708 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 20:11:45.967071   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Start
	I1024 20:11:45.967249   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring networks are active...
	I1024 20:11:45.967957   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring network default is active
	I1024 20:11:45.968324   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring network mk-old-k8s-version-467375 is active
	I1024 20:11:45.968743   50077 main.go:141] libmachine: (old-k8s-version-467375) Getting domain xml...
	I1024 20:11:45.969525   50077 main.go:141] libmachine: (old-k8s-version-467375) Creating domain...
	I1024 20:11:47.346548   50077 main.go:141] libmachine: (old-k8s-version-467375) Waiting to get IP...
	I1024 20:11:47.347505   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.347894   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.347980   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.347887   50579 retry.go:31] will retry after 232.244798ms: waiting for machine to come up
	I1024 20:11:47.581582   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.582087   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.582118   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.582044   50579 retry.go:31] will retry after 319.930019ms: waiting for machine to come up
	I1024 20:11:47.478565   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:47.481659   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:47.482040   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:47.482066   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:47.482265   49708 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1024 20:11:47.487054   49708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:47.499693   49708 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:11:47.499765   49708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:47.551897   49708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:11:47.551978   49708 ssh_runner.go:195] Run: which lz4
	I1024 20:11:47.557026   49708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1024 20:11:47.562364   49708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:11:47.562393   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1024 20:11:43.852350   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.048386   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.117774   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.202966   49198 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:11:44.203042   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:44.215680   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:44.726471   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:45.226100   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:45.726494   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.226510   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.726607   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.758294   49198 api_server.go:72] duration metric: took 2.555329199s to wait for apiserver process to appear ...
	I1024 20:11:46.758319   49198 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:11:46.758339   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:46.758872   49198 api_server.go:269] stopped: https://192.168.72.10:8443/healthz: Get "https://192.168.72.10:8443/healthz": dial tcp 192.168.72.10:8443: connect: connection refused
	I1024 20:11:46.758909   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:46.759318   49198 api_server.go:269] stopped: https://192.168.72.10:8443/healthz: Get "https://192.168.72.10:8443/healthz": dial tcp 192.168.72.10:8443: connect: connection refused
	I1024 20:11:47.260047   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:50.910793   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:11:50.910830   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:11:50.910852   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:50.943069   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:11:50.943100   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:11:51.259498   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:51.265278   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:11:51.265316   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:11:51.759494   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:51.767253   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:11:51.767280   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:11:52.259758   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:52.265202   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 200:
	ok
	I1024 20:11:52.277533   49198 api_server.go:141] control plane version: v1.28.3
	I1024 20:11:52.277561   49198 api_server.go:131] duration metric: took 5.51923389s to wait for apiserver health ...
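	The 403 responses above come from probing /healthz anonymously before RBAC bootstrap completes; once the cluster is up, the same verbose probe can be issued through the admin kubeconfig on the node. A minimal sketch, assuming the binary and kubeconfig paths logged later in this run:
	# verbose healthz through the admin kubeconfig; lists each check like the 500 output above
	sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get --raw='/healthz?verbose'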
	I1024 20:11:52.277572   49198 cni.go:84] Creating CNI manager for ""
	I1024 20:11:52.277580   49198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:52.279542   49198 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:11:47.904065   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.904524   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.904551   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.904467   50579 retry.go:31] will retry after 440.170251ms: waiting for machine to come up
	I1024 20:11:48.346206   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:48.346778   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:48.346802   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:48.346686   50579 retry.go:31] will retry after 472.001777ms: waiting for machine to come up
	I1024 20:11:48.820100   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:48.820625   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:48.820660   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:48.820533   50579 retry.go:31] will retry after 487.055032ms: waiting for machine to come up
	I1024 20:11:49.309351   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:49.309816   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:49.309836   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:49.309751   50579 retry.go:31] will retry after 945.474211ms: waiting for machine to come up
	I1024 20:11:50.257106   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:50.257611   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:50.257641   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:50.257563   50579 retry.go:31] will retry after 915.312538ms: waiting for machine to come up
	I1024 20:11:51.174245   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:51.174832   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:51.174889   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:51.174792   50579 retry.go:31] will retry after 1.09533855s: waiting for machine to come up
	I1024 20:11:52.271604   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:52.272082   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:52.272111   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:52.272041   50579 retry.go:31] will retry after 1.411155014s: waiting for machine to come up
	I1024 20:11:49.517078   49708 crio.go:444] Took 1.960093 seconds to copy over tarball
	I1024 20:11:49.517170   49708 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:11:53.113830   49708 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.596633239s)
	I1024 20:11:53.113858   49708 crio.go:451] Took 3.596755 seconds to extract the tarball
	I1024 20:11:53.113865   49708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:11:53.157476   49708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:53.204980   49708 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:11:53.205004   49708 cache_images.go:84] Images are preloaded, skipping loading
	I1024 20:11:53.205090   49708 ssh_runner.go:195] Run: crio config
	I1024 20:11:53.264588   49708 cni.go:84] Creating CNI manager for ""
	I1024 20:11:53.264613   49708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:53.264634   49708 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:11:53.264662   49708 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.148 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-643126 NodeName:default-k8s-diff-port-643126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:11:53.264869   49708 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.148
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-643126"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:11:53.264975   49708 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-643126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-643126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1024 20:11:53.265054   49708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:11:53.275886   49708 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:11:53.275982   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:11:53.286132   49708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1024 20:11:53.303735   49708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:11:53.319522   49708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
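	The 10-kubeadm.conf drop-in copied above is what injects the CRI socket, node IP and hostname override into the kubelet unit; whether it took effect can be checked with systemd itself. A minimal sketch, assuming a shell on the guest:
	# show the effective unit plus drop-ins; ExecStart should carry the flags listed below
	systemctl cat kubelet
	systemctl status kubelet --no-pager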
	I1024 20:11:53.338388   49708 ssh_runner.go:195] Run: grep 192.168.61.148	control-plane.minikube.internal$ /etc/hosts
	I1024 20:11:53.343108   49708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:53.355662   49708 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126 for IP: 192.168.61.148
	I1024 20:11:53.355709   49708 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:53.355873   49708 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:11:53.355910   49708 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:11:53.356023   49708 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.key
	I1024 20:11:53.356086   49708 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.key.8ba5a111
	I1024 20:11:53.356122   49708 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.key
	I1024 20:11:53.356237   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:11:53.356265   49708 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:11:53.356275   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:11:53.356299   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:11:53.356320   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:11:53.356341   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:11:53.356377   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:53.357029   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:11:53.379968   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:11:53.401871   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:11:53.423699   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:11:53.445338   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:11:53.469994   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:11:53.495061   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
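	After the copies above, the apiserver certificate on the node should carry the SANs generated for this profile (127.0.0.1, localhost, 192.168.61.148 and control-plane.minikube.internal). A minimal sketch for confirming that, assuming openssl is present in the Buildroot guest image:
	# print the Subject Alternative Name extension of the freshly copied apiserver cert
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 "Subject Alternative Name"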
	I1024 20:11:52.281055   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:11:52.299421   49198 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:11:52.322020   49198 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:11:52.334273   49198 system_pods.go:59] 8 kube-system pods found
	I1024 20:11:52.334318   49198 system_pods.go:61] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:11:52.334332   49198 system_pods.go:61] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:11:52.334356   49198 system_pods.go:61] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:11:52.334372   49198 system_pods.go:61] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:11:52.334389   49198 system_pods.go:61] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:11:52.334401   49198 system_pods.go:61] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:11:52.334413   49198 system_pods.go:61] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:11:52.334425   49198 system_pods.go:61] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:11:52.334438   49198 system_pods.go:74] duration metric: took 12.395036ms to wait for pod list to return data ...
	I1024 20:11:52.334450   49198 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:11:52.338486   49198 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:11:52.338518   49198 node_conditions.go:123] node cpu capacity is 2
	I1024 20:11:52.338530   49198 node_conditions.go:105] duration metric: took 4.073559ms to run NodePressure ...
	I1024 20:11:52.338555   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:55.075569   49198 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.736987276s)
	I1024 20:11:55.075611   49198 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:11:55.080481   49198 kubeadm.go:787] kubelet initialised
	I1024 20:11:55.080508   49198 kubeadm.go:788] duration metric: took 4.884507ms waiting for restarted kubelet to initialise ...
	I1024 20:11:55.080519   49198 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:11:55.087371   49198 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.092583   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.092616   49198 pod_ready.go:81] duration metric: took 5.215308ms waiting for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.092627   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.092636   49198 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.098518   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "etcd-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.098540   49198 pod_ready.go:81] duration metric: took 5.887969ms waiting for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.098551   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "etcd-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.098560   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.103375   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.103400   49198 pod_ready.go:81] duration metric: took 4.83092ms waiting for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.103411   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.103419   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.108416   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.108443   49198 pod_ready.go:81] duration metric: took 5.016219ms waiting for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.108454   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.108462   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.482846   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-proxy-thkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.482873   49198 pod_ready.go:81] duration metric: took 374.401616ms waiting for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.482885   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-proxy-thkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.482897   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.879895   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.879922   49198 pod_ready.go:81] duration metric: took 397.016576ms waiting for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.879935   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.879947   49198 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:56.280405   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:56.280445   49198 pod_ready.go:81] duration metric: took 400.488591ms waiting for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:56.280464   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:56.280475   49198 pod_ready.go:38] duration metric: took 1.19994252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
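	The pod_ready loop above skips each pod while the hosting node is still NotReady; the equivalent manual check waits for the node first and then for the labelled system-critical pods. A minimal sketch, assuming the embed-certs-867165 kubeconfig context from this run:
	# wait for the node, then for kube-dns, mirroring the 4m budget used above
	kubectl --context embed-certs-867165 wait --for=condition=Ready node/embed-certs-867165 --timeout=4m
	kubectl --context embed-certs-867165 -n kube-system wait --for=condition=Ready pod \
	  -l k8s-app=kube-dns --timeout=4m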
	I1024 20:11:56.280498   49198 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:11:56.298423   49198 ops.go:34] apiserver oom_adj: -16
	I1024 20:11:56.298445   49198 kubeadm.go:640] restartCluster took 23.358699894s
	I1024 20:11:56.298455   49198 kubeadm.go:406] StartCluster complete in 23.405500606s
	I1024 20:11:56.298474   49198 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:56.298551   49198 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:11:56.300724   49198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:56.300999   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:11:56.301104   49198 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:11:56.301193   49198 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-867165"
	I1024 20:11:56.301203   49198 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:56.301216   49198 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-867165"
	W1024 20:11:56.301261   49198 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:11:56.301260   49198 addons.go:69] Setting metrics-server=true in profile "embed-certs-867165"
	I1024 20:11:56.301290   49198 addons.go:69] Setting default-storageclass=true in profile "embed-certs-867165"
	I1024 20:11:56.301312   49198 addons.go:231] Setting addon metrics-server=true in "embed-certs-867165"
	I1024 20:11:56.301315   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	W1024 20:11:56.301328   49198 addons.go:240] addon metrics-server should already be in state true
	I1024 20:11:56.301331   49198 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-867165"
	I1024 20:11:56.301418   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	I1024 20:11:56.301743   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301744   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301767   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.301771   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.301826   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301867   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.307030   49198 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-867165" context rescaled to 1 replicas
	I1024 20:11:56.307062   49198 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:11:56.309053   49198 out.go:177] * Verifying Kubernetes components...
	I1024 20:11:56.310743   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:11:56.317523   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I1024 20:11:56.317889   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.318430   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.318450   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.318881   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.319437   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.319486   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.320723   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1024 20:11:56.320906   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39685
	I1024 20:11:56.321377   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.321491   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.322079   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.322107   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.322370   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.322389   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.322464   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.322770   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.322829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.323410   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.323444   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.326654   49198 addons.go:231] Setting addon default-storageclass=true in "embed-certs-867165"
	W1024 20:11:56.326674   49198 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:11:56.326700   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	I1024 20:11:56.327084   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.327111   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.335811   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I1024 20:11:56.336310   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.336762   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.336774   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.337109   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.337272   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.338868   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.340964   49198 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:11:56.342438   49198 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:11:56.342454   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:11:56.342472   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.341955   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I1024 20:11:56.343402   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.344019   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.344038   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.344502   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.344694   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.345753   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.346097   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I1024 20:11:56.346367   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.346398   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.346660   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.346666   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.346829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.348534   49198 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:11:53.684729   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:53.685093   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:53.685129   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:53.685030   50579 retry.go:31] will retry after 1.793178726s: waiting for machine to come up
	I1024 20:11:55.481150   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:55.481696   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:55.481729   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:55.481639   50579 retry.go:31] will retry after 2.680463816s: waiting for machine to come up
	I1024 20:11:56.347164   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.347192   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.350114   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.350141   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:11:56.350155   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:11:56.350174   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.350270   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.350397   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.350847   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.351478   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.351514   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.354060   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.354451   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.354472   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.354625   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.354819   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.354978   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.355161   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.371309   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1024 20:11:56.371746   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.372300   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.372325   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.372764   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.372981   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.374651   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.374894   49198 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:11:56.374911   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:11:56.374934   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.377962   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.378385   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.378408   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.378585   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.378789   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.378954   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.379083   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.471271   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:11:56.504355   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:11:56.504382   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:11:56.552351   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:11:56.576037   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:11:56.576068   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:11:56.606745   49198 node_ready.go:35] waiting up to 6m0s for node "embed-certs-867165" to be "Ready" ...
	I1024 20:11:56.606772   49198 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:11:56.620862   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:11:56.620897   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:11:56.676519   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:11:57.851757   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.380440836s)
	I1024 20:11:57.851814   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.851816   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.299429923s)
	I1024 20:11:57.851829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.851865   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.851882   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852242   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852262   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852272   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.852282   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852368   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852412   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852441   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.852467   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852412   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.852537   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852560   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852814   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.852859   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852877   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860105   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183533543s)
	I1024 20:11:57.860176   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.860195   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.860492   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.860494   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.860515   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860526   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.860537   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.860828   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.860857   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.860876   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860886   49198 addons.go:467] Verifying addon metrics-server=true in "embed-certs-867165"
	I1024 20:11:57.860990   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.861011   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.861220   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.861227   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.861236   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.864370   49198 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1024 20:11:53.521030   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:11:53.844700   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:11:53.868393   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:11:53.892495   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:11:53.916345   49708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:11:53.935576   49708 ssh_runner.go:195] Run: openssl version
	I1024 20:11:53.943066   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:11:53.957325   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.962959   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.963026   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.969104   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:11:53.980253   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:11:53.990977   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:53.995906   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:53.995992   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:54.001847   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:11:54.012635   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:11:54.023490   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.028300   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.028355   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.033965   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:11:54.044984   49708 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:11:54.049588   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:11:54.055434   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:11:54.061692   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:11:54.068131   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:11:54.074484   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:11:54.080349   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
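	The run of openssl x509 -noout -in <cert> -checkend 86400 commands above asks, for each control-plane certificate, whether it expires within the next 24 hours (86400 seconds). The same check can be written against Go's crypto/x509 directly; a minimal sketch, assuming the certificate is the first PEM block in the file (the helper name expiresWithin and the example path are illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within d, matching the semantics of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM certificate found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}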
	I1024 20:11:54.086499   49708 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-643126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-643126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.148 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:11:54.086598   49708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:11:54.086655   49708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:54.127406   49708 cri.go:89] found id: ""
	I1024 20:11:54.127494   49708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:11:54.137720   49708 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:11:54.137743   49708 kubeadm.go:636] restartCluster start
	I1024 20:11:54.137801   49708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:11:54.147925   49708 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.149006   49708 kubeconfig.go:92] found "default-k8s-diff-port-643126" server: "https://192.168.61.148:8444"
	I1024 20:11:54.151513   49708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:11:54.162303   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.162371   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.173715   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.173763   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.173816   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.184641   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.685342   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.685431   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.698640   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:55.185173   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:55.185284   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:55.201355   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:55.684814   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:55.684885   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:55.696664   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:56.185711   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:56.185795   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:56.201419   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:56.684932   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:56.685029   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:56.701458   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.185009   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:57.185111   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:57.201166   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.685654   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:57.685739   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:57.701496   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:58.185022   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:58.185076   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:58.197394   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.865715   49198 addons.go:502] enable addons completed in 1.564611111s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1024 20:11:58.683275   49198 node_ready.go:58] node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:58.163942   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:58.164342   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:58.164369   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:58.164308   50579 retry.go:31] will retry after 2.238050336s: waiting for machine to come up
	I1024 20:12:00.403552   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:00.403947   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:12:00.403975   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:12:00.403907   50579 retry.go:31] will retry after 3.901299207s: waiting for machine to come up
	I1024 20:11:58.685131   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:58.685225   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:58.700458   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:59.184854   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:59.184936   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:59.200498   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:59.685159   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:59.685260   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:59.698793   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.185350   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:00.185418   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:00.200046   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.685255   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:00.685341   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:00.698229   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:01.185036   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:01.185105   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:01.200083   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:01.685617   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:01.685700   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:01.697442   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:02.184897   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:02.184980   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:02.196208   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:02.685769   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:02.685854   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:02.697356   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:03.184898   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:03.184977   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:03.196522   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.684425   49198 node_ready.go:58] node "embed-certs-867165" has status "Ready":"False"
	I1024 20:12:01.683130   49198 node_ready.go:49] node "embed-certs-867165" has status "Ready":"True"
	I1024 20:12:01.683154   49198 node_ready.go:38] duration metric: took 5.076371929s waiting for node "embed-certs-867165" to be "Ready" ...
	I1024 20:12:01.683162   49198 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:01.689566   49198 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:01.695393   49198 pod_ready.go:92] pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:01.695416   49198 pod_ready.go:81] duration metric: took 5.827696ms waiting for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:01.695427   49198 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:03.712775   49198 pod_ready.go:102] pod "etcd-embed-certs-867165" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:04.306338   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:04.306804   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:12:04.306835   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:12:04.306770   50579 retry.go:31] will retry after 5.15211395s: waiting for machine to come up
	I1024 20:12:03.685737   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:03.685827   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:03.697510   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:04.163385   49708 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:12:04.163416   49708 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:12:04.163449   49708 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:12:04.163520   49708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:04.209780   49708 cri.go:89] found id: ""
	I1024 20:12:04.209834   49708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:12:04.226347   49708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:12:04.235134   49708 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:12:04.235185   49708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:04.243361   49708 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:04.243380   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:04.370510   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.461155   49708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090606159s)
	I1024 20:12:05.461192   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.649281   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.742338   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.829426   49708 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:12:05.829494   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:05.841869   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:06.356907   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:06.856157   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:07.356140   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:07.856020   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:08.356129   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:08.382595   49708 api_server.go:72] duration metric: took 2.553177252s to wait for apiserver process to appear ...
	I1024 20:12:08.382622   49708 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:12:08.382641   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
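	At this point the restart path has regenerated certs, kubeconfigs, and static pod manifests, waited for the kube-apiserver process to reappear, and switches to polling the apiserver's /healthz endpoint on https://192.168.61.148:8444. A minimal polling sketch in Go; the skip-verify TLS transport is an illustration-only shortcut (minikube's real client verifies against the cluster CA), and waitForHealthz is my name, not minikube's:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes, roughly mirroring the wait loop recorded in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: do not verify the serving certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.148:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}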
	I1024 20:12:04.213550   49198 pod_ready.go:92] pod "etcd-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.213573   49198 pod_ready.go:81] duration metric: took 2.518138084s waiting for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.213585   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.218813   49198 pod_ready.go:92] pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.218841   49198 pod_ready.go:81] duration metric: took 5.247061ms waiting for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.218855   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.224562   49198 pod_ready.go:92] pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.224585   49198 pod_ready.go:81] duration metric: took 5.720637ms waiting for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.224597   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.484197   49198 pod_ready.go:92] pod "kube-proxy-thkqr" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.484216   49198 pod_ready.go:81] duration metric: took 259.611869ms waiting for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.484224   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.883941   49198 pod_ready.go:92] pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.883968   49198 pod_ready.go:81] duration metric: took 399.73679ms waiting for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.883982   49198 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:07.193414   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:10.878419   49071 start.go:369] acquired machines lock for "no-preload-014826" in 1m0.065559113s
	I1024 20:12:10.878467   49071 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:12:10.878475   49071 fix.go:54] fixHost starting: 
	I1024 20:12:10.878869   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:10.878901   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:10.898307   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I1024 20:12:10.898732   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:10.899250   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:12:10.899268   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:10.899614   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:10.899790   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:10.899933   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:12:10.901569   49071 fix.go:102] recreateIfNeeded on no-preload-014826: state=Stopped err=<nil>
	I1024 20:12:10.901593   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	W1024 20:12:10.901753   49071 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:12:10.904367   49071 out.go:177] * Restarting existing kvm2 VM for "no-preload-014826" ...
	I1024 20:12:09.462373   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.462813   50077 main.go:141] libmachine: (old-k8s-version-467375) Found IP for machine: 192.168.39.71
	I1024 20:12:09.462836   50077 main.go:141] libmachine: (old-k8s-version-467375) Reserving static IP address...
	I1024 20:12:09.462853   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has current primary IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.463385   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "old-k8s-version-467375", mac: "52:54:00:28:42:97", ip: "192.168.39.71"} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.463423   50077 main.go:141] libmachine: (old-k8s-version-467375) Reserved static IP address: 192.168.39.71
	I1024 20:12:09.463442   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | skip adding static IP to network mk-old-k8s-version-467375 - found existing host DHCP lease matching {name: "old-k8s-version-467375", mac: "52:54:00:28:42:97", ip: "192.168.39.71"}
	I1024 20:12:09.463463   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Getting to WaitForSSH function...
	I1024 20:12:09.463484   50077 main.go:141] libmachine: (old-k8s-version-467375) Waiting for SSH to be available...
	I1024 20:12:09.465635   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.465951   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.465979   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.466131   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Using SSH client type: external
	I1024 20:12:09.466167   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa (-rw-------)
	I1024 20:12:09.466210   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:12:09.466227   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | About to run SSH command:
	I1024 20:12:09.466256   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | exit 0
	I1024 20:12:09.565274   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | SSH cmd err, output: <nil>: 
	I1024 20:12:09.565647   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetConfigRaw
	I1024 20:12:09.566251   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:09.569078   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.569551   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.569585   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.569863   50077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:12:09.570097   50077 machine.go:88] provisioning docker machine ...
	I1024 20:12:09.570122   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:09.570355   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.570604   50077 buildroot.go:166] provisioning hostname "old-k8s-version-467375"
	I1024 20:12:09.570634   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.570807   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.573170   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.573560   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.573587   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.573757   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:09.573934   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.574080   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.574209   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:09.574414   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:09.574840   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:09.574858   50077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467375 && echo "old-k8s-version-467375" | sudo tee /etc/hostname
	I1024 20:12:09.718150   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467375
	
	I1024 20:12:09.718201   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.721079   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.721461   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.721495   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.721653   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:09.721865   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.722016   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.722167   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:09.722324   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:09.722712   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:09.722732   50077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467375' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467375/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467375' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:12:09.865069   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:12:09.865098   50077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:12:09.865125   50077 buildroot.go:174] setting up certificates
	I1024 20:12:09.865136   50077 provision.go:83] configureAuth start
	I1024 20:12:09.865151   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.865449   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:09.868055   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.868480   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.868513   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.868693   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.870838   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.871203   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.871227   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.871363   50077 provision.go:138] copyHostCerts
	I1024 20:12:09.871411   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:12:09.871423   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:12:09.871490   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:12:09.871613   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:12:09.871625   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:12:09.871655   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:12:09.871743   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:12:09.871753   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:12:09.871783   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:12:09.871856   50077 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467375 san=[192.168.39.71 192.168.39.71 localhost 127.0.0.1 minikube old-k8s-version-467375]
	I1024 20:12:10.091178   50077 provision.go:172] copyRemoteCerts
	I1024 20:12:10.091229   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:12:10.091253   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.094245   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.094550   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.094590   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.094759   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.094955   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.095123   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.095271   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.192715   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:12:10.216110   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:12:10.239468   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 20:12:10.263113   50077 provision.go:86] duration metric: configureAuth took 397.957727ms
	I1024 20:12:10.263138   50077 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:12:10.263366   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:12:10.263480   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.265995   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.266293   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.266334   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.266467   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.266696   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.266863   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.267027   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.267168   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:10.267653   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:10.267677   50077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:12:10.596009   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:12:10.596032   50077 machine.go:91] provisioned docker machine in 1.025920355s
	I1024 20:12:10.596041   50077 start.go:300] post-start starting for "old-k8s-version-467375" (driver="kvm2")
	I1024 20:12:10.596050   50077 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:12:10.596075   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.596415   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:12:10.596450   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.598886   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.599234   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.599259   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.599446   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.599647   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.599812   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.599955   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.697045   50077 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:12:10.701363   50077 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:12:10.701387   50077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:12:10.701458   50077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:12:10.701546   50077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:12:10.701653   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:12:10.712072   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:10.737471   50077 start.go:303] post-start completed in 141.415073ms
	I1024 20:12:10.737508   50077 fix.go:56] fixHost completed within 24.794946143s
	I1024 20:12:10.737533   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.740438   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.740792   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.740820   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.741024   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.741247   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.741428   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.741691   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.741861   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:10.742407   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:10.742431   50077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1024 20:12:10.878250   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178330.824734287
	
	I1024 20:12:10.878273   50077 fix.go:206] guest clock: 1698178330.824734287
	I1024 20:12:10.878283   50077 fix.go:219] Guest: 2023-10-24 20:12:10.824734287 +0000 UTC Remote: 2023-10-24 20:12:10.737513672 +0000 UTC m=+157.935911605 (delta=87.220615ms)
	I1024 20:12:10.878307   50077 fix.go:190] guest clock delta is within tolerance: 87.220615ms
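
The clock check above accepts the ~87ms guest/host skew because it falls inside the allowed tolerance. A minimal Go sketch of that kind of tolerance check (illustrative only; the function names and the one-second tolerance are assumptions, not minikube code):

// Compare a guest clock reading against the host clock and accept the skew
// only if it is within a tolerance, as the fix.go lines above do.
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute skew between guest and host
// timestamps is at most tol.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(87 * time.Millisecond) // e.g. the ~87ms delta logged above
	fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
}
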
	I1024 20:12:10.878314   50077 start.go:83] releasing machines lock for "old-k8s-version-467375", held for 24.935800385s
	I1024 20:12:10.878347   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.878614   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:10.881335   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.881746   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.881784   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.881933   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882442   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882654   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882741   50077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:12:10.882801   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.882860   50077 ssh_runner.go:195] Run: cat /version.json
	I1024 20:12:10.882886   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.885640   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.885856   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886047   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.886070   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886209   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.886276   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.886315   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886383   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.886439   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.886535   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.886579   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.886683   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.886699   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.886816   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:11.006700   50077 ssh_runner.go:195] Run: systemctl --version
	I1024 20:12:11.012734   50077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:12:11.162399   50077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:12:11.169673   50077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:12:11.169751   50077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:12:11.184770   50077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:12:11.184794   50077 start.go:472] detecting cgroup driver to use...
	I1024 20:12:11.184858   50077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:12:11.202317   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:12:11.218122   50077 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:12:11.218187   50077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:12:11.233177   50077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:12:11.247591   50077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:12:11.387195   50077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:12:11.520544   50077 docker.go:214] disabling docker service ...
	I1024 20:12:11.520615   50077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:12:11.539166   50077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:12:11.552957   50077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:12:11.710494   50077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:12:11.837532   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:12:11.854418   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:12:11.874953   50077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1024 20:12:11.875040   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.887115   50077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:12:11.887206   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.898994   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.908652   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.918280   50077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:12:11.930870   50077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:12:11.939522   50077 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:12:11.939580   50077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:12:11.955005   50077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:12:11.965173   50077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:12:12.098480   50077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:12:12.296897   50077 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:12:12.296993   50077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:12:12.302906   50077 start.go:540] Will wait 60s for crictl version
	I1024 20:12:12.302956   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:12.307142   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:12:12.353253   50077 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:12:12.353369   50077 ssh_runner.go:195] Run: crio --version
	I1024 20:12:12.417241   50077 ssh_runner.go:195] Run: crio --version
	I1024 20:12:12.486375   50077 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1024 20:12:12.487819   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:12.491366   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:12.491830   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:12.491862   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:12.492054   50077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 20:12:12.497705   50077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:12.514116   50077 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 20:12:12.514208   50077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:12.569171   50077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 20:12:12.569247   50077 ssh_runner.go:195] Run: which lz4
	I1024 20:12:12.574729   50077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1024 20:12:12.579319   50077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:12:12.579364   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1024 20:12:10.905856   49071 main.go:141] libmachine: (no-preload-014826) Calling .Start
	I1024 20:12:10.906027   49071 main.go:141] libmachine: (no-preload-014826) Ensuring networks are active...
	I1024 20:12:10.906761   49071 main.go:141] libmachine: (no-preload-014826) Ensuring network default is active
	I1024 20:12:10.907112   49071 main.go:141] libmachine: (no-preload-014826) Ensuring network mk-no-preload-014826 is active
	I1024 20:12:10.907486   49071 main.go:141] libmachine: (no-preload-014826) Getting domain xml...
	I1024 20:12:10.908225   49071 main.go:141] libmachine: (no-preload-014826) Creating domain...
	I1024 20:12:12.324832   49071 main.go:141] libmachine: (no-preload-014826) Waiting to get IP...
	I1024 20:12:12.326055   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.326595   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.326695   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.326594   50821 retry.go:31] will retry after 197.462386ms: waiting for machine to come up
	I1024 20:12:12.526293   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.526743   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.526774   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.526720   50821 retry.go:31] will retry after 271.486585ms: waiting for machine to come up
	I1024 20:12:12.800360   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.801756   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.801940   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.801863   50821 retry.go:31] will retry after 486.882671ms: waiting for machine to come up
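
The lines above poll for the VM's DHCP lease with a growing delay between attempts (197ms, 271ms, 486ms, ...). A minimal stdlib sketch of that retry pattern (illustrative only; not minikube's retry.go, and the delays, helper names, and sample address are assumptions):

// Poll a lookup function with a growing delay until it succeeds or the
// deadline passes, as the "will retry after ..." lines above do while
// waiting for the machine to come up.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// waitForIP retries lookup with an increasing delay between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		if time.Now().Add(delay).After(deadline) {
			return "", fmt.Errorf("timed out waiting for IP: %w", err)
		}
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait, roughly like the retries above
	}
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 3 {
			return "", errNoLease
		}
		return "192.168.50.10", nil // hypothetical address for the example
	}, 5*time.Second)
	fmt.Println(ip, err)
}
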
	I1024 20:12:12.479397   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:12.479431   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:12.479445   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:12.490441   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:12.490470   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:12.990764   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:13.006526   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:13.006556   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:13.490974   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:13.499731   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:13.499764   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:09.195216   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:11.694410   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:13.698362   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:13.991467   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:14.011775   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 200:
	ok
	I1024 20:12:14.048756   49708 api_server.go:141] control plane version: v1.28.3
	I1024 20:12:14.048791   49708 api_server.go:131] duration metric: took 5.666161032s to wait for apiserver health ...
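
The preceding healthz output shows the apiserver answering 500 while its post-start hooks finish, then 200 once it is ready, and the test simply polls until it sees 200. A minimal Go sketch of such a readiness poll (illustrative only; the endpoint, timeout, poll interval, and the InsecureSkipVerify setting for the test cluster's self-signed cert are assumptions):

// Poll an apiserver /healthz endpoint until it returns 200 "ok", tolerating
// the interim 500 responses with "[-]poststarthook/... failed" bodies.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test-only, self-signed cert
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // control plane is up
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll spacing seen above
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	// The run above used https://192.168.61.148:8444/healthz.
	if err := waitForHealthz("https://192.168.61.148:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Tolerating the 500s instead of failing on the first one matters here: the failing [-]poststarthook checks clear on their own as the control plane finishes starting, which is exactly the progression visible in the log.
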
	I1024 20:12:14.048802   49708 cni.go:84] Creating CNI manager for ""
	I1024 20:12:14.048812   49708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:14.050652   49708 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:12:14.052331   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:12:14.086953   49708 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:12:14.142753   49708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:12:14.162085   49708 system_pods.go:59] 8 kube-system pods found
	I1024 20:12:14.162211   49708 system_pods.go:61] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:12:14.162246   49708 system_pods.go:61] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:12:14.162280   49708 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:12:14.162307   49708 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:12:14.162330   49708 system_pods.go:61] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:12:14.162352   49708 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:12:14.162375   49708 system_pods.go:61] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:12:14.162411   49708 system_pods.go:61] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:12:14.162434   49708 system_pods.go:74] duration metric: took 19.657104ms to wait for pod list to return data ...
	I1024 20:12:14.162456   49708 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:12:14.173042   49708 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:12:14.173078   49708 node_conditions.go:123] node cpu capacity is 2
	I1024 20:12:14.173093   49708 node_conditions.go:105] duration metric: took 10.618815ms to run NodePressure ...
	I1024 20:12:14.173117   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:14.763495   49708 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:12:14.768626   49708 kubeadm.go:787] kubelet initialised
	I1024 20:12:14.768653   49708 kubeadm.go:788] duration metric: took 5.128553ms waiting for restarted kubelet to initialise ...
	I1024 20:12:14.768663   49708 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:14.788128   49708 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.800546   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.800582   49708 pod_ready.go:81] duration metric: took 12.417978ms waiting for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.800597   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.800610   49708 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.808416   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.808448   49708 pod_ready.go:81] duration metric: took 7.821099ms waiting for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.808463   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.808472   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.814286   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.814317   49708 pod_ready.go:81] duration metric: took 5.833548ms waiting for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.814331   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.814341   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.825548   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.825582   49708 pod_ready.go:81] duration metric: took 11.230382ms waiting for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.825596   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.825606   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.168279   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-proxy-x4zbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.168323   49708 pod_ready.go:81] duration metric: took 342.707312ms waiting for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.168338   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-proxy-x4zbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.168351   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.567697   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.567735   49708 pod_ready.go:81] duration metric: took 399.371702ms waiting for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.567750   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.567838   49708 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.967716   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.967750   49708 pod_ready.go:81] duration metric: took 399.892272ms waiting for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.967764   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.967773   49708 pod_ready.go:38] duration metric: took 1.199098599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:15.967793   49708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:12:15.986399   49708 ops.go:34] apiserver oom_adj: -16
	I1024 20:12:15.986422   49708 kubeadm.go:640] restartCluster took 21.848673162s
	I1024 20:12:15.986430   49708 kubeadm.go:406] StartCluster complete in 21.899940105s
	I1024 20:12:15.986444   49708 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:15.986545   49708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:12:15.989108   49708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:15.989647   49708 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:12:15.989617   49708 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:12:15.989715   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:12:15.989719   49708 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989736   49708 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-643126"
	W1024 20:12:15.989752   49708 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:12:15.989752   49708 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989775   49708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-643126"
	I1024 20:12:15.989786   49708 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989802   49708 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-643126"
	I1024 20:12:15.989804   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	W1024 20:12:15.989809   49708 addons.go:240] addon metrics-server should already be in state true
	I1024 20:12:15.989849   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	I1024 20:12:15.990183   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990192   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990246   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.990294   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.990209   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990327   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.995810   49708 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-643126" context rescaled to 1 replicas
	I1024 20:12:15.995838   49708 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.148 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:12:15.998001   49708 out.go:177] * Verifying Kubernetes components...
	I1024 20:12:16.001589   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:12:16.010690   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I1024 20:12:16.011310   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.011861   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.011890   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.012279   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.012906   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.012960   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.013706   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I1024 20:12:16.014057   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.014533   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.014560   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.014905   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.015330   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I1024 20:12:16.015444   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.015486   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.015703   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.016168   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.016188   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.016591   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.016763   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.020428   49708 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-643126"
	W1024 20:12:16.020448   49708 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:12:16.020474   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	I1024 20:12:16.020840   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.020873   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.031538   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I1024 20:12:16.033822   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.034350   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.034367   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.034746   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.034802   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34969
	I1024 20:12:16.034978   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.035073   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.035525   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.035549   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.035943   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.036217   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.036694   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.038891   49708 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:12:16.037871   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.040815   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:12:16.040832   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:12:16.040851   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.042238   49708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:14.393634   50077 crio.go:444] Took 1.818945 seconds to copy over tarball
	I1024 20:12:14.393720   50077 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:12:17.795931   50077 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.402175992s)
	I1024 20:12:17.795962   50077 crio.go:451] Took 3.402303 seconds to extract the tarball
	I1024 20:12:17.795974   50077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:12:17.841100   50077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:16.043742   49708 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:12:16.043758   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:12:16.043775   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.046924   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.047003   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.047035   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.047068   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.047224   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.049392   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.049433   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.049469   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.049487   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I1024 20:12:16.049492   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.049976   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.050488   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.050502   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.050534   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.050712   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.050810   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.050844   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.050974   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.051292   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.051327   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.051585   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.067412   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I1024 20:12:16.067810   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.068428   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.068445   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.068991   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.069222   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.070923   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.071196   49708 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:12:16.071219   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:12:16.071238   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.074735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.075400   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.075431   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.075630   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.075796   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.075935   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.076097   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.201177   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:12:16.201198   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:12:16.224757   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:12:16.247200   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:12:16.247225   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:12:16.259476   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:12:16.324327   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:12:16.324354   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:12:16.371331   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:12:16.384042   49708 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-643126" to be "Ready" ...
	I1024 20:12:16.384367   49708 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:12:17.654459   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.429657283s)
	I1024 20:12:17.654516   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.654529   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.654951   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:17.654978   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.654990   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:17.655004   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.655016   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.655330   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.655353   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:17.672310   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.672337   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.672693   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:17.672738   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.672761   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.138719   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.879209719s)
	I1024 20:12:18.138769   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.138783   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.139079   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.139091   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.139103   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.139117   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.139132   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.139322   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.139338   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.139338   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.203722   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.832303736s)
	I1024 20:12:18.203776   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.203793   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.204088   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.204106   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.204118   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.204128   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.204348   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.204378   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.204393   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.204406   49708 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-643126"
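For anyone reproducing this step by hand, a minimal hedged sketch of how the metrics-server rollout can be checked from the host after the manifests above are applied. The kubectl context name is the profile name from this log; the APIService name is the one metrics-server normally registers and is assumed here rather than shown in the log:

	# deployment created by the metrics-server addon manifests applied above
	kubectl --context default-k8s-diff-port-643126 -n kube-system rollout status deploy/metrics-server --timeout=120s
	# the aggregated API metrics-server serves; Available=True means the apiserver can reach it
	kubectl --context default-k8s-diff-port-643126 get apiservice v1beta1.metrics.k8s.io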
	I1024 20:12:13.290974   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:13.291494   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:13.291524   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:13.291402   50821 retry.go:31] will retry after 588.738796ms: waiting for machine to come up
	I1024 20:12:13.882058   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:13.882661   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:13.882685   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:13.882577   50821 retry.go:31] will retry after 626.457323ms: waiting for machine to come up
	I1024 20:12:14.510560   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:14.511120   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:14.511159   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:14.511059   50821 retry.go:31] will retry after 848.521213ms: waiting for machine to come up
	I1024 20:12:15.360917   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:15.361423   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:15.361452   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:15.361397   50821 retry.go:31] will retry after 790.780783ms: waiting for machine to come up
	I1024 20:12:16.153815   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:16.154332   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:16.154364   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:16.154274   50821 retry.go:31] will retry after 1.066691012s: waiting for machine to come up
	I1024 20:12:17.222675   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:17.223280   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:17.223309   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:17.223248   50821 retry.go:31] will retry after 1.657285361s: waiting for machine to come up
	I1024 20:12:18.299768   49708 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1024 20:12:16.196266   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:18.197531   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:18.397703   49708 node_ready.go:58] node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:17.907894   50077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 20:12:18.029064   50077 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 20:12:18.029174   50077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.029196   50077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.029209   50077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.029219   50077 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.029403   50077 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1024 20:12:18.029418   50077 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.029178   50077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.029178   50077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.030719   50077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.030726   50077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.030730   50077 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1024 20:12:18.030748   50077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.030775   50077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.030801   50077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.030972   50077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.031077   50077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.180435   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.182586   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.185966   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1024 20:12:18.190926   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.196636   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.198176   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.205102   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.285789   50077 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1024 20:12:18.285837   50077 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.285889   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.356595   50077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1024 20:12:18.356639   50077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.356678   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.370773   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.387248   50077 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1024 20:12:18.387295   50077 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.387343   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.387461   50077 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1024 20:12:18.387488   50077 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1024 20:12:18.387530   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400566   50077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1024 20:12:18.400608   50077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.400647   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400660   50077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1024 20:12:18.400705   50077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.400742   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400754   50077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1024 20:12:18.400785   50077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.400812   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400845   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.400814   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.545451   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.545541   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1024 20:12:18.545587   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.545674   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.545724   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.545777   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1024 20:12:18.545734   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1024 20:12:18.683462   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1024 20:12:18.683513   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1024 20:12:18.683578   50077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1024 20:12:18.683656   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1024 20:12:18.683686   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1024 20:12:18.683732   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1024 20:12:18.688916   50077 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1024 20:12:18.688954   50077 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1024 20:12:18.689040   50077 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1024 20:12:20.355824   50077 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.666754363s)
	I1024 20:12:20.355859   50077 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1024 20:12:20.355920   50077 cache_images.go:92] LoadImages completed in 2.326833316s
	W1024 20:12:20.356004   50077 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
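As a hedged aside, the cache/load dance above can be reproduced by hand; the paths are the ones from this log, and podman is used because the node runs CRI-O:

	# tarballs minikube keeps on the host for offline image loading
	ls /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/
	# on the node: load a cached archive into the CRI-O image store, then confirm crictl sees it
	sudo podman load -i /var/lib/minikube/images/pause_3.1
	sudo crictl images | grep pause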
	I1024 20:12:20.356080   50077 ssh_runner.go:195] Run: crio config
	I1024 20:12:20.428753   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:12:20.428775   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:20.428793   50077 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:12:20.428835   50077 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467375 NodeName:old-k8s-version-467375 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1024 20:12:20.429015   50077 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467375"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-467375
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.71:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
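	# The block above is the kubeadm config minikube renders for this v1.16.0 profile. A rough,
	# hedged way to sanity-check it on the node once it has been copied to
	# /var/tmp/minikube/kubeadm.yaml.new (the scp step a few lines below), assuming that kubeadm
	# binary still accepts the older v1beta1 schema:
	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new
	# compare with the config currently in place - the same diff the restart path runs later in this log
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new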
	
	I1024 20:12:20.429115   50077 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467375 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
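A small, hedged sketch of how to confirm the systemd drop-in above is actually in effect on the node; these are standard systemd commands, nothing minikube-specific is assumed:

	# re-read unit files after the drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl daemon-reload
	# show the unit plus every drop-in that overrides it, including the ExecStart above
	systemctl cat kubelet
	# confirm the service is (or becomes) active
	systemctl is-active kubelet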
	I1024 20:12:20.429179   50077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1024 20:12:20.440158   50077 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:12:20.440239   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:12:20.450883   50077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1024 20:12:20.470913   50077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:12:20.490653   50077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1024 20:12:20.510287   50077 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I1024 20:12:20.514815   50077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
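The bash one-liner above rewrites /etc/hosts so the control-plane name resolves locally; a quick, hedged verification afterwards:

	# the record the command above is meant to leave in place
	grep 'control-plane.minikube.internal' /etc/hosts
	# confirm the resolver actually returns the expected address
	getent hosts control-plane.minikube.internal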
	I1024 20:12:20.526910   50077 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375 for IP: 192.168.39.71
	I1024 20:12:20.526943   50077 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:20.527172   50077 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:12:20.527227   50077 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:12:20.527313   50077 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.key
	I1024 20:12:20.527401   50077 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.key.f4667c0f
	I1024 20:12:20.527458   50077 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.key
	I1024 20:12:20.527617   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:12:20.527658   50077 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:12:20.527672   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:12:20.527712   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:12:20.527768   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:12:20.527803   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:12:20.527867   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:20.528563   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:12:20.561437   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:12:20.593396   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:12:20.626812   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 20:12:20.659073   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:12:20.690934   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:12:20.723550   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:12:20.754091   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:12:20.785078   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:12:20.813190   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:12:20.845338   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:12:20.876594   50077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:12:20.899560   50077 ssh_runner.go:195] Run: openssl version
	I1024 20:12:20.907482   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:12:20.922776   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.929623   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.929693   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.935454   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:12:20.947494   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:12:20.958906   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.964115   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.964177   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.970084   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:12:20.982477   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:12:20.995317   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.000479   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.000568   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.006797   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
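The <hash>.0 symlinks created above follow OpenSSL's hashed certificate-directory convention: the link name is the certificate's subject-name hash, which is what -CApath-style lookups key on. A minimal sketch using the minikubeCA certificate from this log:

	# subject-name hash OpenSSL uses for the link name (b5213941 for minikubeCA.pem in the log above)
	h=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
	# create or refresh the hashed symlink, mirroring the ln -fs calls above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"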
	I1024 20:12:21.020161   50077 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:12:21.025037   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:12:21.033376   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:12:21.041858   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:12:21.050119   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:12:21.058140   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:12:21.066151   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
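The -checkend 86400 runs above are the expiry probes: openssl exits 0 if the certificate is still valid 86400 seconds (24 hours) from now and 1 if it would expire sooner, which is what tells minikube whether certs need regenerating. A one-line hedged example against one of the certs listed above:

	# prints "Certificate will not expire" and returns 0 when the cert outlives the next 24h
	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "etcd server cert ok" || echo "etcd server cert expires within 24h"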
	I1024 20:12:21.074299   50077 kubeadm.go:404] StartCluster: {Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:12:21.074409   50077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:12:21.074454   50077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:21.125486   50077 cri.go:89] found id: ""
	I1024 20:12:21.125559   50077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:12:21.139034   50077 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:12:21.139058   50077 kubeadm.go:636] restartCluster start
	I1024 20:12:21.139113   50077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:12:21.151994   50077 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.153569   50077 kubeconfig.go:92] found "old-k8s-version-467375" server: "https://192.168.39.71:8443"
	I1024 20:12:21.157114   50077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:12:21.169908   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.169998   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.186116   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.186138   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.186187   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.201283   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.702002   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.702084   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.717499   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:22.201839   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:22.201946   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:22.217814   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:22.702454   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:22.702525   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:22.720944   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:18.882382   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:18.882833   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:18.882869   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:18.882798   50821 retry.go:31] will retry after 1.854607935s: waiting for machine to come up
	I1024 20:12:20.738594   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:20.739327   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:20.739375   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:20.739255   50821 retry.go:31] will retry after 2.774006375s: waiting for machine to come up
	I1024 20:12:18.891092   49708 addons.go:502] enable addons completed in 2.901476764s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1024 20:12:20.898330   49708 node_ready.go:58] node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:22.897985   49708 node_ready.go:49] node "default-k8s-diff-port-643126" has status "Ready":"True"
	I1024 20:12:22.898016   49708 node_ready.go:38] duration metric: took 6.51394456s waiting for node "default-k8s-diff-port-643126" to be "Ready" ...
	I1024 20:12:22.898029   49708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:22.907326   49708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:22.915330   49708 pod_ready.go:92] pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:22.915354   49708 pod_ready.go:81] duration metric: took 7.999933ms waiting for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:22.915366   49708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:20.698011   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:23.195726   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:23.201529   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:23.201620   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:23.215098   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:23.701482   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:23.701572   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:23.715481   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:24.201550   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:24.201610   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:24.218008   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:24.701489   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:24.701591   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:24.716960   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:25.201492   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:25.201558   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:25.215972   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:25.701398   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:25.701506   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:25.714016   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:26.201948   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:26.202018   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:26.215403   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:26.701876   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:26.701948   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:26.714598   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:27.202095   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:27.202161   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:27.215728   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:27.702476   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:27.702589   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:27.715925   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:23.514310   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:23.514813   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:23.514850   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:23.514763   50821 retry.go:31] will retry after 3.277478612s: waiting for machine to come up
	I1024 20:12:26.793845   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:26.794291   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:26.794312   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:26.794249   50821 retry.go:31] will retry after 4.518205069s: waiting for machine to come up
	I1024 20:12:24.934951   49708 pod_ready.go:92] pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:24.934977   49708 pod_ready.go:81] duration metric: took 2.019602232s waiting for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.934990   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.940403   49708 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:24.940424   49708 pod_ready.go:81] duration metric: took 5.425415ms waiting for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.940437   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.805106   49708 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:25.805127   49708 pod_ready.go:81] duration metric: took 864.682784ms waiting for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.805137   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.096987   49708 pod_ready.go:92] pod "kube-proxy-x4zbh" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:26.097025   49708 pod_ready.go:81] duration metric: took 291.86715ms waiting for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.097040   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.497404   49708 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:26.497425   49708 pod_ready.go:81] duration metric: took 400.376909ms waiting for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.497444   49708 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.694439   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:28.192955   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:28.201919   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:28.201990   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:28.215407   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:28.701578   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:28.701658   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:28.714135   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:29.202433   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:29.202553   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:29.214936   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:29.702439   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:29.702499   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:29.714852   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:30.202428   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:30.202500   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:30.214283   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:30.702441   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:30.702500   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:30.715562   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:31.170652   50077 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:12:31.170682   50077 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:12:31.170693   50077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:12:31.170772   50077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:31.231971   50077 cri.go:89] found id: ""
	I1024 20:12:31.232068   50077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:12:31.249451   50077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:12:31.261057   50077 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:12:31.261124   50077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:31.270878   50077 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:31.270901   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:31.407803   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.357283   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.567466   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.659297   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.745553   50077 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:12:32.745629   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:32.761052   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:31.314269   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.314887   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has current primary IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.314912   49071 main.go:141] libmachine: (no-preload-014826) Found IP for machine: 192.168.50.162
	I1024 20:12:31.314926   49071 main.go:141] libmachine: (no-preload-014826) Reserving static IP address...
	I1024 20:12:31.315396   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "no-preload-014826", mac: "52:54:00:33:64:68", ip: "192.168.50.162"} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.315434   49071 main.go:141] libmachine: (no-preload-014826) DBG | skip adding static IP to network mk-no-preload-014826 - found existing host DHCP lease matching {name: "no-preload-014826", mac: "52:54:00:33:64:68", ip: "192.168.50.162"}
	I1024 20:12:31.315448   49071 main.go:141] libmachine: (no-preload-014826) Reserved static IP address: 192.168.50.162
	I1024 20:12:31.315465   49071 main.go:141] libmachine: (no-preload-014826) Waiting for SSH to be available...
	I1024 20:12:31.315483   49071 main.go:141] libmachine: (no-preload-014826) DBG | Getting to WaitForSSH function...
	I1024 20:12:31.318209   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.318611   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.318653   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.318819   49071 main.go:141] libmachine: (no-preload-014826) DBG | Using SSH client type: external
	I1024 20:12:31.318871   49071 main.go:141] libmachine: (no-preload-014826) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa (-rw-------)
	I1024 20:12:31.318916   49071 main.go:141] libmachine: (no-preload-014826) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:12:31.318941   49071 main.go:141] libmachine: (no-preload-014826) DBG | About to run SSH command:
	I1024 20:12:31.318957   49071 main.go:141] libmachine: (no-preload-014826) DBG | exit 0
	I1024 20:12:31.414054   49071 main.go:141] libmachine: (no-preload-014826) DBG | SSH cmd err, output: <nil>: 
	I1024 20:12:31.414566   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetConfigRaw
	I1024 20:12:31.415326   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:31.418120   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.418549   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.418582   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.418808   49071 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/config.json ...
	I1024 20:12:31.419009   49071 machine.go:88] provisioning docker machine ...
	I1024 20:12:31.419033   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:31.419222   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.419399   49071 buildroot.go:166] provisioning hostname "no-preload-014826"
	I1024 20:12:31.419423   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.419578   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.421861   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.422241   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.422273   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.422501   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.422676   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.422847   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.423066   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.423250   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.423707   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.423724   49071 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-014826 && echo "no-preload-014826" | sudo tee /etc/hostname
	I1024 20:12:31.557472   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-014826
	
	I1024 20:12:31.557504   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.560529   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.560928   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.560979   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.561201   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.561457   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.561654   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.561817   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.561968   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.562329   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.562357   49071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-014826' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-014826/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-014826' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:12:31.694896   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:12:31.694927   49071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:12:31.694948   49071 buildroot.go:174] setting up certificates
	I1024 20:12:31.694959   49071 provision.go:83] configureAuth start
	I1024 20:12:31.694967   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.695264   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:31.697858   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.698148   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.698176   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.698357   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.700982   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.701332   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.701364   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.701570   49071 provision.go:138] copyHostCerts
	I1024 20:12:31.701625   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:12:31.701642   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:12:31.701733   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:12:31.701845   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:12:31.701857   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:12:31.701883   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:12:31.701947   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:12:31.701956   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:12:31.701978   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:12:31.702043   49071 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.no-preload-014826 san=[192.168.50.162 192.168.50.162 localhost 127.0.0.1 minikube no-preload-014826]
	I1024 20:12:31.798568   49071 provision.go:172] copyRemoteCerts
	I1024 20:12:31.798622   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:12:31.798642   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.801859   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.802237   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.802269   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.802465   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.802672   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.802867   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.803027   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:31.891633   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:12:31.916451   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1024 20:12:31.937924   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:12:31.961360   49071 provision.go:86] duration metric: configureAuth took 266.390893ms
	I1024 20:12:31.961384   49071 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:12:31.961573   49071 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:12:31.961660   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.964354   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.964662   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.964719   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.964798   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.965002   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.965170   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.965329   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.965516   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.965961   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.965983   49071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:12:32.275884   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:12:32.275911   49071 machine.go:91] provisioned docker machine in 856.887593ms
	I1024 20:12:32.275923   49071 start.go:300] post-start starting for "no-preload-014826" (driver="kvm2")
	I1024 20:12:32.275935   49071 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:12:32.275957   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.276268   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:12:32.276298   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.279248   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.279642   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.279678   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.279798   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.279985   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.280182   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.280455   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.371931   49071 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:12:32.375989   49071 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:12:32.376009   49071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:12:32.376077   49071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:12:32.376173   49071 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:12:32.376295   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:12:32.385018   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:32.408697   49071 start.go:303] post-start completed in 132.759815ms
	I1024 20:12:32.408719   49071 fix.go:56] fixHost completed within 21.530244363s
	I1024 20:12:32.408744   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.411800   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.412155   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.412189   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.412363   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.412574   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.412741   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.412916   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.413083   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:32.413469   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:32.413483   49071 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:12:32.534092   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178352.477877903
	
	I1024 20:12:32.534116   49071 fix.go:206] guest clock: 1698178352.477877903
	I1024 20:12:32.534127   49071 fix.go:219] Guest: 2023-10-24 20:12:32.477877903 +0000 UTC Remote: 2023-10-24 20:12:32.408724059 +0000 UTC m=+364.183674654 (delta=69.153844ms)
	I1024 20:12:32.534153   49071 fix.go:190] guest clock delta is within tolerance: 69.153844ms
	I1024 20:12:32.534159   49071 start.go:83] releasing machines lock for "no-preload-014826", held for 21.655714466s
	I1024 20:12:32.534185   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.534468   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:32.537523   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.537932   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.537961   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.538160   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.538690   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.538919   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.539004   49071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:12:32.539089   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.539138   49071 ssh_runner.go:195] Run: cat /version.json
	I1024 20:12:32.539166   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.542176   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542308   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542652   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.542689   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.542714   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542732   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542981   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.542985   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.543207   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.543214   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.543387   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.543429   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.543573   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.543579   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.631242   49071 ssh_runner.go:195] Run: systemctl --version
	I1024 20:12:32.657695   49071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:12:32.808471   49071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:12:32.815640   49071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:12:32.815712   49071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:12:32.830198   49071 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:12:32.830219   49071 start.go:472] detecting cgroup driver to use...
	I1024 20:12:32.830295   49071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:12:32.845231   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:12:32.863283   49071 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:12:32.863328   49071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:12:32.878295   49071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:12:32.894182   49071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:12:33.024491   49071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:12:33.156548   49071 docker.go:214] disabling docker service ...
	I1024 20:12:33.156621   49071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:12:33.169940   49071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:12:33.182368   49071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:12:28.804366   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:30.806145   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:32.806217   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:30.193022   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:32.195173   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:33.297156   49071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:12:33.434526   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:12:33.453482   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:12:33.471594   49071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:12:33.471665   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.481491   49071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:12:33.481563   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.490505   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.500003   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.509825   49071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:12:33.524014   49071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:12:33.532876   49071 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:12:33.532936   49071 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:12:33.545922   49071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:12:33.554519   49071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:12:33.661858   49071 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:12:33.867286   49071 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:12:33.867361   49071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:12:33.873180   49071 start.go:540] Will wait 60s for crictl version
	I1024 20:12:33.873259   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:33.877238   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:12:33.918479   49071 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:12:33.918624   49071 ssh_runner.go:195] Run: crio --version
	I1024 20:12:33.970986   49071 ssh_runner.go:195] Run: crio --version
	I1024 20:12:34.026667   49071 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 20:12:33.278190   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:33.777448   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:34.277381   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:34.320204   50077 api_server.go:72] duration metric: took 1.574651034s to wait for apiserver process to appear ...
	I1024 20:12:34.320230   50077 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:12:34.320258   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.320744   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I1024 20:12:34.320773   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.321162   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I1024 20:12:34.821724   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.028144   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:34.031311   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:34.031699   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:34.031733   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:34.031888   49071 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1024 20:12:34.036386   49071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:34.052307   49071 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:12:34.052360   49071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:34.099209   49071 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:12:34.099236   49071 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 20:12:34.099291   49071 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.099331   49071 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.099331   49071 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.099414   49071 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.099497   49071 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1024 20:12:34.099512   49071 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.099547   49071 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.099575   49071 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.101069   49071 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.101083   49071 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.101096   49071 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1024 20:12:34.101077   49071 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.101135   49071 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.101147   49071 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.101173   49071 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.101428   49071 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.283586   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.292930   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.294280   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.303296   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1024 20:12:34.314337   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.323356   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.327726   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.373724   49071 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1024 20:12:34.373774   49071 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.373819   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.466499   49071 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1024 20:12:34.466540   49071 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.466582   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.487167   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.489929   49071 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1024 20:12:34.489986   49071 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.490027   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588137   49071 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1024 20:12:34.588178   49071 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.588206   49071 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1024 20:12:34.588231   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588248   49071 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.588286   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588308   49071 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1024 20:12:34.588330   49071 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.588340   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.588358   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588388   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.588410   49071 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1024 20:12:34.588427   49071 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.588447   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588448   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.605099   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.693897   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.694097   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1024 20:12:34.694204   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.707142   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.707184   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.707265   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1024 20:12:34.707388   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:34.707384   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1024 20:12:34.707516   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:34.722106   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1024 20:12:34.722205   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:34.776997   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1024 20:12:34.777019   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.777067   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.777089   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1024 20:12:34.777180   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:34.804122   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1024 20:12:34.804241   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:34.814486   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1024 20:12:34.814532   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1024 20:12:34.814567   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1024 20:12:34.814607   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1024 20:12:34.814634   49071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:38.115460   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (3.338366217s)
	I1024 20:12:38.115492   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1024 20:12:38.115516   49071 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:38.115548   49071 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.3: (3.338341429s)
	I1024 20:12:38.115570   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:38.115586   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1024 20:12:38.115618   49071 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.3: (3.311351093s)
	I1024 20:12:38.115644   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1024 20:12:38.115650   49071 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.30100028s)
	I1024 20:12:38.115665   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1024 20:12:34.807460   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:37.307370   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:34.696540   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:37.192160   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:39.822511   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1024 20:12:39.822561   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:40.734083   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:12:40.734125   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:12:40.734161   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:40.777985   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1024 20:12:40.778037   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1024 20:12:40.822134   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.042292   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.042343   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:41.321887   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.363625   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.363682   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:41.821995   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.828080   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.828114   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:42.321381   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:42.331626   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1024 20:12:42.342584   50077 api_server.go:141] control plane version: v1.16.0
	I1024 20:12:42.342614   50077 api_server.go:131] duration metric: took 8.022377051s to wait for apiserver health ...
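
The healthz wait above polls /healthz until the apiserver answers 200 "ok", treating the intermediate 403 (anonymous access rejected while the RBAC bootstrap roles are still missing) and 500 (poststarthooks still failing) responses as "not ready yet". A minimal sketch of such a polling loop, assuming a self-signed apiserver certificate (hence the skipped TLS verification); the endpoint is the one from this run but serves only as an example:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// or the deadline passes. 403/500 responses count as "not ready yet",
// matching the progression seen in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serving cert is self-signed from the client's view.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	// Endpoint taken from this run; substitute your own apiserver address.
	if err := waitForHealthz("https://192.168.39.71:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
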
	I1024 20:12:42.342626   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:12:42.342634   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:42.344676   50077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:12:42.346118   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:12:42.363399   50077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:12:42.389481   50077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:12:42.403326   50077 system_pods.go:59] 7 kube-system pods found
	I1024 20:12:42.403370   50077 system_pods.go:61] "coredns-5644d7b6d9-x567q" [1dc7f1c2-4997-4330-a9bc-b914b1c1db9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:12:42.403381   50077 system_pods.go:61] "etcd-old-k8s-version-467375" [62c8ab28-033f-43fa-96b2-e127d8d46730] Running
	I1024 20:12:42.403389   50077 system_pods.go:61] "kube-apiserver-old-k8s-version-467375" [87c58a79-9f12-4be3-a450-69aa22674541] Running
	I1024 20:12:42.403398   50077 system_pods.go:61] "kube-controller-manager-old-k8s-version-467375" [6bf66f9f-1431-4b3f-b186-528945c54a63] Running
	I1024 20:12:42.403412   50077 system_pods.go:61] "kube-proxy-jdvck" [d35f42b9-9be8-43ee-8434-3d557e31bfde] Running
	I1024 20:12:42.403418   50077 system_pods.go:61] "kube-scheduler-old-k8s-version-467375" [63ae0d31-ace3-4490-a2e8-ed110e3a1072] Running
	I1024 20:12:42.403424   50077 system_pods.go:61] "storage-provisioner" [9105f8d8-3aa1-422d-acf2-9f83e9ede8af] Running
	I1024 20:12:42.403431   50077 system_pods.go:74] duration metric: took 13.927429ms to wait for pod list to return data ...
	I1024 20:12:42.403440   50077 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:12:42.408844   50077 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:12:42.408890   50077 node_conditions.go:123] node cpu capacity is 2
	I1024 20:12:42.408905   50077 node_conditions.go:105] duration metric: took 5.459392ms to run NodePressure ...
	I1024 20:12:42.408926   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:42.701645   50077 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:12:42.707084   50077 retry.go:31] will retry after 366.455415ms: kubelet not initialised
	I1024 20:12:39.807495   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:42.306172   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:39.193434   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:41.195135   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:43.694847   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:43.078083   50077 retry.go:31] will retry after 411.231242ms: kubelet not initialised
	I1024 20:12:43.494711   50077 retry.go:31] will retry after 768.972767ms: kubelet not initialised
	I1024 20:12:44.268690   50077 retry.go:31] will retry after 693.655783ms: kubelet not initialised
	I1024 20:12:45.186580   50077 retry.go:31] will retry after 1.610937297s: kubelet not initialised
	I1024 20:12:46.803897   50077 retry.go:31] will retry after 959.133509ms: kubelet not initialised
	I1024 20:12:47.768260   50077 retry.go:31] will retry after 1.51466069s: kubelet not initialised
	I1024 20:12:45.464752   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.34915976s)
	I1024 20:12:45.464779   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1024 20:12:45.464821   49071 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:45.464899   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:46.936699   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.471766425s)
	I1024 20:12:46.936725   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1024 20:12:46.936750   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:46.936790   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:44.806094   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:46.807137   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:45.696196   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:48.192732   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:49.288179   50077 retry.go:31] will retry after 5.048749504s: kubelet not initialised
	I1024 20:12:49.615688   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.678859869s)
	I1024 20:12:49.615726   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1024 20:12:49.615763   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:49.615840   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:51.387159   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.771279542s)
	I1024 20:12:51.387185   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1024 20:12:51.387209   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:51.387258   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:52.868127   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.480840395s)
	I1024 20:12:52.868158   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1024 20:12:52.868184   49071 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:52.868233   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:49.304156   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:51.305456   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:53.307726   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:50.195756   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:52.196133   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
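
The interleaved pod_ready lines come from two other profiles repeatedly checking whether their metrics-server pod has reached the Ready condition. A minimal client-go sketch of that kind of check, assuming client-go is available and a kubeconfig at the default location; the namespace and pod name are taken from the log, and the helper name is illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll the pod named in the log until it reports Ready or we give up.
	for i := 0; i < 60; i++ {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-lmxdt", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("pod never became Ready")
}
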
	I1024 20:12:54.342759   50077 retry.go:31] will retry after 8.402807892s: kubelet not initialised
	I1024 20:12:53.617841   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1024 20:12:53.617883   49071 cache_images.go:123] Successfully loaded all cached images
	I1024 20:12:53.617889   49071 cache_images.go:92] LoadImages completed in 19.518639759s
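
The image-loading phase above boils down to stat-ing each cached tarball on the guest, skipping anything already present, and running "sudo podman load -i <tarball>" for the rest before marking it transferred. A minimal local sketch of that load step, assuming podman is installed; the path is taken from the log but stands in for whichever tarball is being loaded:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage mimics the "podman load -i" step from the log: skip the load
// when the tarball is missing, otherwise stream podman's output through.
func loadImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("skipping %s: %w", tarball, err)
	}
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Path taken from the log; any local image tarball works here.
	if err := loadImage("/var/lib/minikube/images/kube-apiserver_v1.28.3"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
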
	I1024 20:12:53.617972   49071 ssh_runner.go:195] Run: crio config
	I1024 20:12:53.677157   49071 cni.go:84] Creating CNI manager for ""
	I1024 20:12:53.677181   49071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:53.677198   49071 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:12:53.677215   49071 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.162 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-014826 NodeName:no-preload-014826 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:12:53.677386   49071 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-014826"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:12:53.677482   49071 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-014826 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-014826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:12:53.677552   49071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:12:53.688840   49071 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:12:53.688904   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:12:53.700095   49071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1024 20:12:53.717176   49071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:12:53.737316   49071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1024 20:12:53.756100   49071 ssh_runner.go:195] Run: grep 192.168.50.162	control-plane.minikube.internal$ /etc/hosts
	I1024 20:12:53.760013   49071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:53.771571   49071 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826 for IP: 192.168.50.162
	I1024 20:12:53.771601   49071 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:53.771752   49071 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:12:53.771811   49071 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:12:53.771896   49071 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.key
	I1024 20:12:53.771975   49071 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.key.1b8245f8
	I1024 20:12:53.772056   49071 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.key
	I1024 20:12:53.772205   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:12:53.772250   49071 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:12:53.772262   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:12:53.772303   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:12:53.772333   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:12:53.772354   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:12:53.772397   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:53.773081   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:12:53.797387   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:12:53.822084   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:12:53.846401   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:12:53.869361   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:12:53.891519   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:12:53.914051   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:12:53.935925   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:12:53.958389   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:12:53.982011   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:12:54.005921   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:12:54.029793   49071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:12:54.047319   49071 ssh_runner.go:195] Run: openssl version
	I1024 20:12:54.053493   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:12:54.064414   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.069060   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.069115   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.075137   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:12:54.088046   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:12:54.099949   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.104810   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.104867   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.110617   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:12:54.122160   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:12:54.133062   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.137858   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.137922   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.144146   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:12:54.155998   49071 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:12:54.160989   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:12:54.167441   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:12:54.173797   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:12:54.180320   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:12:54.186876   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:12:54.193624   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
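
The openssl calls above confirm that each control-plane certificate is still valid for at least another day: "x509 -checkend 86400" exits 0 only when the certificate does not expire within the next 86400 seconds. A minimal sketch of that check, assuming openssl is on PATH; the certificate path is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// certValidForADay returns true when the certificate will not expire within
// the next 86400 seconds; openssl exits non-zero otherwise (or if the file
// cannot be read).
func certValidForADay(certPath string) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	return cmd.Run() == nil
}

func main() {
	// Illustrative path; the run above checks the certs under /var/lib/minikube/certs.
	if certValidForADay("/var/lib/minikube/certs/apiserver-etcd-client.crt") {
		fmt.Println("certificate is valid for at least another 24h")
	} else {
		fmt.Println("certificate expires within 24h (or could not be read)")
	}
}
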
	I1024 20:12:54.200066   49071 kubeadm.go:404] StartCluster: {Name:no-preload-014826 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-014826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:12:54.200165   49071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:12:54.200202   49071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:54.253207   49071 cri.go:89] found id: ""
	I1024 20:12:54.253267   49071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:12:54.264316   49071 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:12:54.264348   49071 kubeadm.go:636] restartCluster start
	I1024 20:12:54.264404   49071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:12:54.276382   49071 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.277506   49071 kubeconfig.go:92] found "no-preload-014826" server: "https://192.168.50.162:8443"
	I1024 20:12:54.279888   49071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:12:54.290005   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.290052   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.302383   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.302400   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.302447   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.315130   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.815483   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.815574   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.827862   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.315372   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:55.315430   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:55.328409   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.816079   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:55.816141   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:55.829755   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:56.315782   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:56.315869   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:56.329006   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:56.815526   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:56.815621   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:56.828167   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:57.315692   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:57.315781   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:57.328590   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:57.816175   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:57.816250   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:57.832014   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.805830   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:57.810013   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:54.692702   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:57.192210   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:02.750533   50077 retry.go:31] will retry after 7.667287878s: kubelet not initialised
	I1024 20:12:58.315841   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:58.315922   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:58.329743   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:58.815711   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:58.815779   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:58.828215   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:59.315817   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:59.315924   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:59.328911   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:59.815493   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:59.815583   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:59.829684   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.316215   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:00.316294   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:00.330227   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.815830   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:00.815901   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:00.828290   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:01.315228   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:01.315319   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:01.329972   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:01.815426   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:01.815495   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:01.829199   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:02.315754   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:02.315834   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:02.328463   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:02.816091   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:02.816175   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:02.830548   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.304116   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:02.304336   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:59.193761   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:01.692343   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:03.693961   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:03.315186   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:03.315249   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:03.327729   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:03.815302   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:03.815389   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:03.827308   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:04.290952   49071 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:13:04.290993   49071 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:13:04.291005   49071 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:13:04.291078   49071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:13:04.333468   49071 cri.go:89] found id: ""
	I1024 20:13:04.333543   49071 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:13:04.351889   49071 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:13:04.362176   49071 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:13:04.362251   49071 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:13:04.372650   49071 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:13:04.372683   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:04.495803   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.080838   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.290640   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.379839   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.458741   49071 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:13:05.458843   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:05.475039   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:05.997438   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:06.496596   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:06.996587   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:07.496933   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:07.514268   49071 api_server.go:72] duration metric: took 2.055524654s to wait for apiserver process to appear ...
	I1024 20:13:07.514294   49071 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:13:07.514310   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:07.514802   49071 api_server.go:269] stopped: https://192.168.50.162:8443/healthz: Get "https://192.168.50.162:8443/healthz": dial tcp 192.168.50.162:8443: connect: connection refused
	I1024 20:13:07.514840   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:07.515243   49071 api_server.go:269] stopped: https://192.168.50.162:8443/healthz: Get "https://192.168.50.162:8443/healthz": dial tcp 192.168.50.162:8443: connect: connection refused
	I1024 20:13:08.015912   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:04.306097   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:06.805484   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:05.698099   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:08.196336   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:10.424613   50077 retry.go:31] will retry after 17.161095389s: kubelet not initialised
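
The retry.go lines above re-run the "kubelet not initialised" check with a growing, jittered delay between attempts. A minimal sketch of a retry helper in that spirit; the backoff constants and function names are illustrative, not the ones minikube uses:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping a jittered, growing delay between tries, similar to the
// "will retry after ..." lines in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Exponential growth with up to 50% random jitter on top.
		delay := base * time.Duration(1<<i)
		delay += time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// The check always fails here, just to show the retry progression.
	err := retryWithBackoff(5, 300*time.Millisecond, func() error {
		return errors.New("kubelet not initialised")
	})
	fmt.Println("gave up:", err)
}
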
	I1024 20:13:12.512885   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.512923   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:12.512936   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:12.564368   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.564415   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:12.564435   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:12.578188   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.578210   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:13.015415   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:13.022900   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:13:13.022939   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:13:09.305906   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:11.805107   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:10.693989   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:12.696233   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:13.515731   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:13.520510   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:13:13.520565   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:13:14.015693   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:14.021308   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 200:
	ok
	I1024 20:13:14.029247   49071 api_server.go:141] control plane version: v1.28.3
	I1024 20:13:14.029271   49071 api_server.go:131] duration metric: took 6.514969351s to wait for apiserver health ...
	I1024 20:13:14.029281   49071 cni.go:84] Creating CNI manager for ""
	I1024 20:13:14.029289   49071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:13:14.031023   49071 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:13:14.032390   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:13:14.042542   49071 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:13:14.061827   49071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:13:14.077006   49071 system_pods.go:59] 8 kube-system pods found
	I1024 20:13:14.077041   49071 system_pods.go:61] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:13:14.077058   49071 system_pods.go:61] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:13:14.077068   49071 system_pods.go:61] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:13:14.077078   49071 system_pods.go:61] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:13:14.077088   49071 system_pods.go:61] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:13:14.077102   49071 system_pods.go:61] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:13:14.077114   49071 system_pods.go:61] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:13:14.077125   49071 system_pods.go:61] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:13:14.077140   49071 system_pods.go:74] duration metric: took 15.296766ms to wait for pod list to return data ...
	I1024 20:13:14.077150   49071 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:13:14.080871   49071 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:13:14.080896   49071 node_conditions.go:123] node cpu capacity is 2
	I1024 20:13:14.080908   49071 node_conditions.go:105] duration metric: took 3.7473ms to run NodePressure ...
	I1024 20:13:14.080921   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:14.292868   49071 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:13:14.297583   49071 kubeadm.go:787] kubelet initialised
	I1024 20:13:14.297611   49071 kubeadm.go:788] duration metric: took 4.717728ms waiting for restarted kubelet to initialise ...
	I1024 20:13:14.297621   49071 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:14.303742   49071 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.309570   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.309600   49071 pod_ready.go:81] duration metric: took 5.835917ms waiting for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.309608   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.309616   49071 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.316423   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "etcd-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.316453   49071 pod_ready.go:81] duration metric: took 6.829373ms waiting for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.316577   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "etcd-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.316593   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.325238   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-apiserver-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.325271   49071 pod_ready.go:81] duration metric: took 8.669582ms waiting for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.325280   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-apiserver-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.325288   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.466293   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.466319   49071 pod_ready.go:81] duration metric: took 141.023699ms waiting for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.466331   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.466342   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.865820   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-proxy-hvphg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.865855   49071 pod_ready.go:81] duration metric: took 399.504017ms waiting for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.865867   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-proxy-hvphg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.865876   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:15.266786   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-scheduler-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.266820   49071 pod_ready.go:81] duration metric: took 400.936146ms waiting for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:15.266833   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-scheduler-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.266844   49071 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:15.666547   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.666582   49071 pod_ready.go:81] duration metric: took 399.72944ms waiting for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:15.666596   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.666617   49071 pod_ready.go:38] duration metric: took 1.368975115s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
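
[Editor's note: the pod_ready.go lines above and below poll pods in kube-system until their Ready condition is true or a timeout elapses. A minimal, hypothetical client-go sketch of that pattern is shown here for context; it is not minikube's actual implementation, and the kubeconfig path, namespace, and pod name are illustrative assumptions drawn from the log.]

// Minimal sketch, assuming a standard client-go clientset and a kubeconfig at the
// default location: poll every two seconds until the named pod reports Ready=True.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady fetches the pod repeatedly and checks its PodReady condition.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed location
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Example: wait up to 4 minutes for the metrics-server pod named in the log above.
	if err := waitPodReady(cs, "kube-system", "metrics-server-57f55c9bc5-tsfvs", 4*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}

[End of editor's note; the test log continues below.]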
	I1024 20:13:15.666636   49071 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:13:15.686675   49071 ops.go:34] apiserver oom_adj: -16
	I1024 20:13:15.686696   49071 kubeadm.go:640] restartCluster took 21.422341568s
	I1024 20:13:15.686706   49071 kubeadm.go:406] StartCluster complete in 21.486646231s
	I1024 20:13:15.686737   49071 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:13:15.686823   49071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:13:15.688903   49071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:13:15.689192   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:13:15.689321   49071 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:13:15.689405   49071 addons.go:69] Setting storage-provisioner=true in profile "no-preload-014826"
	I1024 20:13:15.689423   49071 addons.go:231] Setting addon storage-provisioner=true in "no-preload-014826"
	I1024 20:13:15.689462   49071 addons.go:69] Setting metrics-server=true in profile "no-preload-014826"
	I1024 20:13:15.689490   49071 addons.go:231] Setting addon metrics-server=true in "no-preload-014826"
	W1024 20:13:15.689512   49071 addons.go:240] addon metrics-server should already be in state true
	I1024 20:13:15.689560   49071 host.go:66] Checking if "no-preload-014826" exists ...
	W1024 20:13:15.689463   49071 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:13:15.689649   49071 host.go:66] Checking if "no-preload-014826" exists ...
	I1024 20:13:15.689445   49071 addons.go:69] Setting default-storageclass=true in profile "no-preload-014826"
	I1024 20:13:15.689716   49071 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-014826"
	I1024 20:13:15.689431   49071 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:13:15.690018   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690051   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.690060   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690086   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.690173   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690225   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.695832   49071 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-014826" context rescaled to 1 replicas
	I1024 20:13:15.695868   49071 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:13:15.698104   49071 out.go:177] * Verifying Kubernetes components...
	I1024 20:13:15.701812   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:13:15.708637   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45543
	I1024 20:13:15.709086   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.709579   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I1024 20:13:15.709941   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.709959   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.710044   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.710478   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.710629   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.710640   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.710943   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.710954   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.711125   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.711367   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I1024 20:13:15.711702   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.711739   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.711852   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.712441   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.712453   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.713081   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.713312   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.717141   49071 addons.go:231] Setting addon default-storageclass=true in "no-preload-014826"
	W1024 20:13:15.717173   49071 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:13:15.717201   49071 host.go:66] Checking if "no-preload-014826" exists ...
	I1024 20:13:15.717655   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.717688   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.729423   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38983
	I1024 20:13:15.730145   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.730747   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.730763   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.730811   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
	I1024 20:13:15.731224   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.731294   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.731487   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.731691   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.731704   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.732239   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.732712   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.733909   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.736374   49071 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:13:15.734682   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.736231   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I1024 20:13:15.738165   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:13:15.738178   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:13:15.738198   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.739819   49071 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:13:15.741717   49071 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:13:15.741733   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:13:15.741752   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.739693   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.742202   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.742374   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.742389   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.742978   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.743000   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.743088   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.743253   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.743408   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.743896   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.744551   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.745028   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.745145   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.745266   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.745462   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.745486   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.745735   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.745870   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.745956   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.746023   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.782650   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I1024 20:13:15.783126   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.783699   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.783721   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.784051   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.784270   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.786114   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.786409   49071 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:13:15.786424   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:13:15.786439   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.788982   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.789347   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.789376   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.789622   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.789838   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.790047   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.790195   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.870753   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:13:15.870771   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:13:15.893772   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:13:15.893799   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:13:15.916179   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:13:15.928570   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:13:15.928596   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:13:15.950610   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:13:15.987129   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:13:15.987945   49071 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:13:15.987993   49071 node_ready.go:35] waiting up to 6m0s for node "no-preload-014826" to be "Ready" ...
	I1024 20:13:17.450534   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.53431699s)
	I1024 20:13:17.450534   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.499892733s)
	I1024 20:13:17.450586   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.450597   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.450609   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.450621   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451126   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451143   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451152   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451160   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451176   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.451180   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451186   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451190   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.451200   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451211   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451380   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451410   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451415   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451429   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451430   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451442   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.464276   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.464297   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.464561   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.464578   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.464585   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.626276   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.639098267s)
	I1024 20:13:17.626344   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.626364   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.626686   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.626711   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.626713   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.626765   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.626779   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.627054   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.627071   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.627082   49071 addons.go:467] Verifying addon metrics-server=true in "no-preload-014826"
	I1024 20:13:17.629289   49071 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1024 20:13:17.630781   49071 addons.go:502] enable addons completed in 1.94145774s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1024 20:13:18.084997   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:13.805526   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:15.807970   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:18.305400   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:15.194668   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:17.694096   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:20.085063   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:22.086260   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:23.087300   49071 node_ready.go:49] node "no-preload-014826" has status "Ready":"True"
	I1024 20:13:23.087338   49071 node_ready.go:38] duration metric: took 7.0993157s waiting for node "no-preload-014826" to be "Ready" ...
	I1024 20:13:23.087350   49071 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:23.093785   49071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:23.101553   49071 pod_ready.go:92] pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:23.101576   49071 pod_ready.go:81] duration metric: took 7.766543ms waiting for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:23.101588   49071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:20.808097   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:23.306150   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:19.696002   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:22.195097   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:27.592041   50077 kubeadm.go:787] kubelet initialised
	I1024 20:13:27.592064   50077 kubeadm.go:788] duration metric: took 44.890387595s waiting for restarted kubelet to initialise ...
	I1024 20:13:27.592071   50077 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:27.596611   50077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.601949   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.601972   50077 pod_ready.go:81] duration metric: took 5.342417ms waiting for pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.601979   50077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.607096   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.607118   50077 pod_ready.go:81] duration metric: took 5.132259ms waiting for pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.607130   50077 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.611971   50077 pod_ready.go:92] pod "etcd-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.611991   50077 pod_ready.go:81] duration metric: took 4.854068ms waiting for pod "etcd-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.612002   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.616975   50077 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.616995   50077 pod_ready.go:81] duration metric: took 4.985984ms waiting for pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.617006   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.620272   49071 pod_ready.go:92] pod "etcd-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:24.620294   49071 pod_ready.go:81] duration metric: took 1.518699618s waiting for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.620304   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.625954   49071 pod_ready.go:92] pod "kube-apiserver-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:24.625975   49071 pod_ready.go:81] duration metric: took 5.666043ms waiting for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.625985   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.096309   49071 pod_ready.go:92] pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.096338   49071 pod_ready.go:81] duration metric: took 2.470345358s waiting for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.096363   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.101417   49071 pod_ready.go:92] pod "kube-proxy-hvphg" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.101439   49071 pod_ready.go:81] duration metric: took 5.060638ms waiting for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.101457   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.487627   49071 pod_ready.go:92] pod "kube-scheduler-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.487655   49071 pod_ready.go:81] duration metric: took 386.189892ms waiting for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.487668   49071 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:25.805375   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:28.304314   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:24.199489   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:26.694339   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:27.990781   50077 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.990808   50077 pod_ready.go:81] duration metric: took 373.794401ms waiting for pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.990817   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jdvck" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.389532   50077 pod_ready.go:92] pod "kube-proxy-jdvck" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:28.389554   50077 pod_ready.go:81] duration metric: took 398.730628ms waiting for pod "kube-proxy-jdvck" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.389562   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.791217   50077 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:28.791245   50077 pod_ready.go:81] duration metric: took 401.675656ms waiting for pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.791259   50077 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:31.101273   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:29.797752   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:32.294823   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:30.305423   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:32.804966   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:29.196181   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:31.694405   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:33.597846   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.098571   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:34.295326   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.295502   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:35.307544   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:37.804734   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:34.193583   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.194545   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.693640   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.598114   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.598778   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.295582   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.797360   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.303674   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:42.305932   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:41.193409   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.694630   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.097684   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.599550   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.295412   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.295801   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:47.795437   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:44.806885   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:47.305513   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.695737   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:48.194597   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:48.098390   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:50.098465   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.598464   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:49.796354   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.296299   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:49.806019   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.304671   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:50.692678   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.693810   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:55.099808   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:57.596982   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:54.795042   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:56.795788   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:54.305480   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:56.805003   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:55.192666   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:57.192992   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.598091   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:02.097277   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.296748   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.799381   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.304665   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.305140   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.193682   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.694286   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.098871   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.598019   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.297114   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.796174   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:03.804391   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:05.805262   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.304535   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.194236   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.692751   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.693756   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.598278   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:10.598744   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:09.296355   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:11.794188   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:10.805023   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.304639   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:11.193179   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.696086   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.097069   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.598606   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.795184   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.797064   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.804980   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.304229   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:16.193316   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.193452   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.099418   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.597767   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.598478   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.294610   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.295299   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.295580   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.304386   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.304955   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.693442   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.695298   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.598688   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.098094   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.796039   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.294583   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.804411   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:26.805975   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:25.193984   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.194309   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.098448   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.597809   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.295004   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.296770   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.302945   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.303224   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.305333   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.693713   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.693887   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.695638   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.599337   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:36.098527   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.795335   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:35.796128   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:37.798347   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:35.307171   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:37.806058   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:36.192382   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:38.195932   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:38.098563   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.098830   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.598203   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.295075   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.796827   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.304919   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.805069   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.693934   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.694102   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.598267   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.097792   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:45.297437   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.795616   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.805647   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:46.806849   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.695195   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.194156   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.597390   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:52.099367   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:50.294686   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:52.297230   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.306571   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:51.804484   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.194481   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:51.693650   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:53.694257   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:54.597760   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.597897   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:54.794752   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.795666   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:53.805053   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.303997   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:58.304326   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.193984   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:58.693506   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:59.098488   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:01.098937   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:59.297834   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:01.795492   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:00.305557   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:02.805113   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:00.694107   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.194559   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.597853   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:05.598764   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.798231   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:06.296567   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:04.805204   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:06.806277   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:05.693959   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.194793   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.098369   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:10.099343   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:12.597632   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.795941   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:11.295163   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:09.303880   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:11.308399   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:10.692947   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:12.694115   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.098788   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.598778   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:13.297546   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.799219   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:13.804941   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.805508   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.805620   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.194071   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.692344   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.099461   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:22.598528   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:18.294855   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.795197   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.303894   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:22.807109   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:19.693273   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:21.694158   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:23.694489   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:24.598739   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:26.610829   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:23.295231   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:25.296151   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:27.794796   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:25.304009   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:27.304056   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:26.194236   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:28.692475   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.097722   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.099314   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.795050   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.795981   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.304915   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.306232   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:30.693731   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.193919   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.100924   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:35.597972   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:37.598135   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:34.295967   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:36.297180   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.809488   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:36.305924   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:35.696190   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.193380   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.098563   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:42.597443   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.794953   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.794982   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.806251   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:41.304826   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.694041   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.192299   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:44.598402   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.097519   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.294813   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.297991   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.794454   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.803978   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.804440   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.805016   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.192754   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.693494   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.098171   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:51.598327   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.795988   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:52.296853   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.806503   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:51.807986   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:50.193124   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:52.692831   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.097085   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.600496   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.795189   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.795825   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.304728   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.305314   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.696873   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:57.193194   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.098128   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.099894   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.295180   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.295325   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:58.804230   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:00.804430   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.303762   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.193752   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.194280   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.694730   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.597363   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.598434   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.599790   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.295998   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.298356   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.795402   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.305076   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.805412   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:04.884378   49198 pod_ready.go:81] duration metric: took 4m0.000380407s waiting for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	E1024 20:16:04.884408   49198 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:16:04.884437   49198 pod_ready.go:38] duration metric: took 4m3.201253081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
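The long run of pod_ready.go:102 entries above is a readiness poll that finally gives up at pod_ready.go:81 with "context deadline exceeded" once its 4m budget is spent. A minimal client-go sketch of that poll-until-deadline pattern follows; it is illustrative only, not minikube's actual pod_ready.go — the waitPodReady name, the 2s interval, and the kubeconfig handling are assumptions.

    // Sketch: poll a pod's Ready condition until it is True or the deadline
    // passes, roughly what produces the repeated `has status "Ready":"False"`
    // lines followed by `context deadline exceeded` in the log above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady is a hypothetical helper, not minikube's implementation.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err // give up on API errors in this sketch
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        fmt.Printf("pod %q has status \"Ready\":%q\n", name, c.Status)
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil // Ready condition not reported yet
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(context.Background(), cs, "kube-system", "metrics-server-57f55c9bc5-pv9ww"); err != nil {
            fmt.Println("wait failed:", err)
        }
    }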
	I1024 20:16:04.884459   49198 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:16:04.884488   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:04.884542   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:04.941853   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:04.941878   49198 cri.go:89] found id: ""
	I1024 20:16:04.941889   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:04.941963   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:04.947250   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:04.947317   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:04.990126   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:04.990151   49198 cri.go:89] found id: ""
	I1024 20:16:04.990163   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:04.990226   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:04.995026   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:04.995086   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:05.045422   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:05.045441   49198 cri.go:89] found id: ""
	I1024 20:16:05.045449   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:05.045505   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.049931   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:05.049997   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:05.115746   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:05.115767   49198 cri.go:89] found id: ""
	I1024 20:16:05.115775   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:05.115822   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.120476   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:05.120527   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:05.163487   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:05.163509   49198 cri.go:89] found id: ""
	I1024 20:16:05.163521   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:05.163580   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.167956   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:05.168027   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:05.209375   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:05.209403   49198 cri.go:89] found id: ""
	I1024 20:16:05.209412   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:05.209468   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.213932   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:05.213994   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:05.256033   49198 cri.go:89] found id: ""
	I1024 20:16:05.256055   49198 logs.go:284] 0 containers: []
	W1024 20:16:05.256070   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:05.256077   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:05.256130   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:05.313137   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:05.313163   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:05.313171   49198 cri.go:89] found id: ""
	I1024 20:16:05.313181   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:05.313236   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.319603   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.324116   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:05.324138   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:05.364879   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:05.364905   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:05.430314   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:05.430342   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:05.488524   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:05.488550   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:05.547000   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:05.547029   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:05.561360   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:05.561392   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:05.616215   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:05.616254   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:05.666923   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:05.666955   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:05.707305   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:05.707332   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:05.865943   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:05.865972   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:05.914044   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:05.914070   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:06.370658   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:06.370692   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:06.423891   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:06.423919   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
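The cri.go / logs.go sequence above is two steps: resolve container IDs with `crictl ps -a --quiet --name=<component>`, then dump the last 400 lines of each with `crictl logs --tail 400 <id>`. Below is a rough standalone sketch of the same collection, assumed to run directly on the node rather than through minikube's ssh_runner; dumpComponentLogs is a hypothetical name.

    // Sketch: mirror the crictl ps / crictl logs pairing from the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // dumpComponentLogs finds every container whose name matches the component
    // and prints the tail of its log.
    func dumpComponentLogs(component string, tail int) error {
        ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return fmt.Errorf("listing %s containers: %w", component, err)
        }
        for _, id := range strings.Fields(string(ids)) {
            fmt.Printf("==> %s [%s] <==\n", component, id)
            out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
            if err != nil {
                return fmt.Errorf("crictl logs %s: %w", id, err)
            }
            fmt.Print(string(out))
        }
        return nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
            if err := dumpComponentLogs(c, 400); err != nil {
                fmt.Println(err)
            }
        }
    }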
	I1024 20:16:10.098187   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:12.597089   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:09.796035   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:11.796300   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:09.805755   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:11.806246   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:08.967015   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:16:08.982371   49198 api_server.go:72] duration metric: took 4m12.675281905s to wait for apiserver process to appear ...
	I1024 20:16:08.982397   49198 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:16:08.982431   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:08.982492   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:09.023557   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:09.023575   49198 cri.go:89] found id: ""
	I1024 20:16:09.023582   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:09.023626   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.029901   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:09.029954   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:09.066141   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:09.066169   49198 cri.go:89] found id: ""
	I1024 20:16:09.066181   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:09.066232   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.071099   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:09.071161   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:09.117898   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:09.117917   49198 cri.go:89] found id: ""
	I1024 20:16:09.117927   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:09.117979   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.122675   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:09.122729   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:09.162628   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:09.162647   49198 cri.go:89] found id: ""
	I1024 20:16:09.162656   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:09.162711   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.166799   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:09.166859   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:09.203866   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:09.203894   49198 cri.go:89] found id: ""
	I1024 20:16:09.203904   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:09.203968   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.208141   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:09.208201   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:09.252432   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:09.252449   49198 cri.go:89] found id: ""
	I1024 20:16:09.252457   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:09.252519   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.257709   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:09.257767   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:09.312883   49198 cri.go:89] found id: ""
	I1024 20:16:09.312908   49198 logs.go:284] 0 containers: []
	W1024 20:16:09.312919   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:09.312926   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:09.312984   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:09.365111   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:09.365138   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:09.365145   49198 cri.go:89] found id: ""
	I1024 20:16:09.365155   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:09.365215   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.370442   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.375055   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:09.375082   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:09.440328   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:09.440361   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:09.489007   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:09.489035   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:09.539429   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:09.539467   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:09.591012   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:09.591049   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:09.608336   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:09.608362   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:09.656190   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:09.656216   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:09.704915   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:09.704942   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:09.743847   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:09.743878   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:10.154301   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:10.154342   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:10.296525   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:10.296552   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:10.347731   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:10.347763   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:10.388130   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:10.388157   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:12.931381   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:16:12.938286   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 200:
	ok
	I1024 20:16:12.940208   49198 api_server.go:141] control plane version: v1.28.3
	I1024 20:16:12.940228   49198 api_server.go:131] duration metric: took 3.957823811s to wait for apiserver health ...
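The api_server.go lines above probe https://192.168.72.10:8443/healthz and treat an HTTP 200 with body "ok" as healthy. A bare-bones version of that probe is sketched below; as an assumption it skips TLS verification for brevity, whereas the real check uses the cluster's CA and client certificates.

    // checkHealthz performs a single GET against the apiserver healthz endpoint
    // and reports whether it returned 200. Sketch only.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; use the cluster CA in practice
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("apiserver not healthy: %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://192.168.72.10:8443/healthz"); err != nil {
            fmt.Println(err)
        }
    }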
	I1024 20:16:12.940236   49198 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:16:12.940255   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:12.940311   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:12.985630   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:12.985654   49198 cri.go:89] found id: ""
	I1024 20:16:12.985664   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:12.985736   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:12.991021   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:12.991094   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:13.031617   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:13.031638   49198 cri.go:89] found id: ""
	I1024 20:16:13.031647   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:13.031690   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.036956   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:13.037010   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:13.074663   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:13.074683   49198 cri.go:89] found id: ""
	I1024 20:16:13.074692   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:13.074745   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.079061   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:13.079115   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:13.122923   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:13.122947   49198 cri.go:89] found id: ""
	I1024 20:16:13.122957   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:13.123010   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.126914   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:13.126987   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:13.174746   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:13.174781   49198 cri.go:89] found id: ""
	I1024 20:16:13.174791   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:13.174867   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.179817   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:13.179884   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:13.228560   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:13.228588   49198 cri.go:89] found id: ""
	I1024 20:16:13.228606   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:13.228661   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.233182   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:13.233247   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:13.272072   49198 cri.go:89] found id: ""
	I1024 20:16:13.272100   49198 logs.go:284] 0 containers: []
	W1024 20:16:13.272110   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:13.272117   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:13.272174   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:13.317104   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:13.317129   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:13.317137   49198 cri.go:89] found id: ""
	I1024 20:16:13.317148   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:13.317208   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.327265   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.331706   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:13.331730   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:13.378259   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:13.378299   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:13.402257   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:13.402289   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:13.465655   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:13.465685   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:13.521268   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:13.521312   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:13.923501   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:13.923550   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:13.976055   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:13.976082   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:14.028953   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:14.028985   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:14.069859   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:14.069887   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:14.196920   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:14.196959   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:14.257588   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:14.257617   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:14.302980   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:14.303019   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:14.344441   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:14.344469   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:16.893365   49198 system_pods.go:59] 8 kube-system pods found
	I1024 20:16:16.893395   49198 system_pods.go:61] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running
	I1024 20:16:16.893404   49198 system_pods.go:61] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running
	I1024 20:16:16.893412   49198 system_pods.go:61] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running
	I1024 20:16:16.893419   49198 system_pods.go:61] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running
	I1024 20:16:16.893426   49198 system_pods.go:61] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running
	I1024 20:16:16.893433   49198 system_pods.go:61] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running
	I1024 20:16:16.893444   49198 system_pods.go:61] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:16.893456   49198 system_pods.go:61] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running
	I1024 20:16:16.893469   49198 system_pods.go:74] duration metric: took 3.953227014s to wait for pod list to return data ...
	I1024 20:16:16.893483   49198 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:16:16.895879   49198 default_sa.go:45] found service account: "default"
	I1024 20:16:16.895896   49198 default_sa.go:55] duration metric: took 2.405313ms for default service account to be created ...
	I1024 20:16:16.895903   49198 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:16:16.902189   49198 system_pods.go:86] 8 kube-system pods found
	I1024 20:16:16.902217   49198 system_pods.go:89] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running
	I1024 20:16:16.902225   49198 system_pods.go:89] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running
	I1024 20:16:16.902232   49198 system_pods.go:89] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running
	I1024 20:16:16.902240   49198 system_pods.go:89] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running
	I1024 20:16:16.902246   49198 system_pods.go:89] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running
	I1024 20:16:16.902253   49198 system_pods.go:89] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running
	I1024 20:16:16.902269   49198 system_pods.go:89] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:16.902281   49198 system_pods.go:89] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running
	I1024 20:16:16.902292   49198 system_pods.go:126] duration metric: took 6.383517ms to wait for k8s-apps to be running ...
	I1024 20:16:16.902303   49198 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:16:16.902359   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:16:16.920015   49198 system_svc.go:56] duration metric: took 17.706073ms WaitForService to wait for kubelet.
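The system_svc.go step above relies on systemctl's exit status: `is-active --quiet` prints nothing and exits 0 only when the unit is active. A tiny wrapper around that check (isServiceActive is a hypothetical name; it shells out locally rather than through minikube's ssh_runner):

    // isServiceActive returns true when `systemctl is-active --quiet <unit>`
    // exits 0, i.e. the unit is currently active. Sketch only.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func isServiceActive(unit string) bool {
        // --quiet suppresses output; the exit code alone carries the answer.
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", isServiceActive("kubelet"))
    }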
	I1024 20:16:16.920039   49198 kubeadm.go:581] duration metric: took 4m20.612955305s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:16:16.920063   49198 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:16:16.924147   49198 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:16:16.924167   49198 node_conditions.go:123] node cpu capacity is 2
	I1024 20:16:16.924177   49198 node_conditions.go:105] duration metric: took 4.109839ms to run NodePressure ...
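The node_conditions.go lines above read each node's ephemeral-storage and CPU capacity straight from the node status. A compact client-go sketch of that read follows; printNodeCapacity is an illustrative name, and building the clientset is left out (it could be constructed via clientcmd as in the earlier sketch).

    // printNodeCapacity reports the same two figures the log lines above show:
    // ephemeral storage capacity and CPU capacity per node. Sketch only.
    package nodeinfo

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }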
	I1024 20:16:16.924187   49198 start.go:228] waiting for startup goroutines ...
	I1024 20:16:16.924194   49198 start.go:233] waiting for cluster config update ...
	I1024 20:16:16.924206   49198 start.go:242] writing updated cluster config ...
	I1024 20:16:16.924490   49198 ssh_runner.go:195] Run: rm -f paused
	I1024 20:16:16.973588   49198 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:16:16.975639   49198 out.go:177] * Done! kubectl is now configured to use "embed-certs-867165" cluster and "default" namespace by default
	I1024 20:16:14.597646   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.598202   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:14.296652   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.795527   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:14.304610   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.305225   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.598694   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:21.099076   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.795830   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:21.295897   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.804148   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:20.805158   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.304826   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.598167   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.598533   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:27.598810   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.794690   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.796011   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:27.798006   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.803034   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:26.497612   49708 pod_ready.go:81] duration metric: took 4m0.000149915s waiting for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	E1024 20:16:26.497657   49708 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:16:26.497666   49708 pod_ready.go:38] duration metric: took 4m3.599625321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:16:26.497682   49708 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:16:26.497709   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:26.497757   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:26.569452   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:26.569479   49708 cri.go:89] found id: ""
	I1024 20:16:26.569489   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:26.569551   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.573824   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:26.573872   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:26.618910   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:26.618939   49708 cri.go:89] found id: ""
	I1024 20:16:26.618946   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:26.618998   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.623675   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:26.623723   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:26.671601   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:26.671621   49708 cri.go:89] found id: ""
	I1024 20:16:26.671628   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:26.671665   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.675997   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:26.676048   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:26.723100   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:26.723124   49708 cri.go:89] found id: ""
	I1024 20:16:26.723133   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:26.723187   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.727780   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:26.727837   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:26.765584   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:26.765608   49708 cri.go:89] found id: ""
	I1024 20:16:26.765618   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:26.765663   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.770062   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:26.770121   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:26.811710   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:26.811728   49708 cri.go:89] found id: ""
	I1024 20:16:26.811736   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:26.811786   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.816125   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:26.816187   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:26.860427   49708 cri.go:89] found id: ""
	I1024 20:16:26.860452   49708 logs.go:284] 0 containers: []
	W1024 20:16:26.860462   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:26.860469   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:26.860532   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:26.905052   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:26.905083   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:26.905091   49708 cri.go:89] found id: ""
	I1024 20:16:26.905100   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:26.905154   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.909590   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.913618   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:26.913636   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:26.958127   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:26.958157   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:27.012523   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:27.012555   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:27.059311   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:27.059345   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:27.102879   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:27.102905   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:27.154377   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:27.154409   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:27.197488   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:27.197516   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:27.210530   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:27.210559   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:27.379195   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:27.379225   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:27.826087   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:27.826119   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:27.880305   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:27.880348   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:27.932382   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:27.932417   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:27.979060   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:27.979088   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:29.598843   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:31.598885   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:30.295090   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:32.295447   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:30.532134   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:16:30.547497   49708 api_server.go:72] duration metric: took 4m14.551629626s to wait for apiserver process to appear ...
	I1024 20:16:30.547522   49708 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:16:30.547562   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:30.547627   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:30.588076   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:30.588097   49708 cri.go:89] found id: ""
	I1024 20:16:30.588104   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:30.588159   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.592397   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:30.592467   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:30.632362   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:30.632380   49708 cri.go:89] found id: ""
	I1024 20:16:30.632389   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:30.632446   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.636647   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:30.636695   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:30.676966   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:30.676997   49708 cri.go:89] found id: ""
	I1024 20:16:30.677005   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:30.677050   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.682153   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:30.682206   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:30.723427   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:30.723449   49708 cri.go:89] found id: ""
	I1024 20:16:30.723458   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:30.723516   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.727674   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:30.727740   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:30.774450   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:30.774473   49708 cri.go:89] found id: ""
	I1024 20:16:30.774482   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:30.774535   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.778753   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:30.778821   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:30.830068   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:30.830094   49708 cri.go:89] found id: ""
	I1024 20:16:30.830104   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:30.830169   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.835133   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:30.835201   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:30.885323   49708 cri.go:89] found id: ""
	I1024 20:16:30.885347   49708 logs.go:284] 0 containers: []
	W1024 20:16:30.885357   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:30.885363   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:30.885423   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:30.925415   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:30.925435   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:30.925440   49708 cri.go:89] found id: ""
	I1024 20:16:30.925447   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:30.925506   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.929723   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.933926   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:30.933965   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:30.999217   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:30.999250   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:31.051267   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:31.051300   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:31.107411   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:31.107444   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:31.233980   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:31.234009   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:31.275335   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:31.275362   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:31.329276   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:31.329316   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:31.380149   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:31.380184   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:31.393990   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:31.394016   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:31.440032   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:31.440065   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:31.478413   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:31.478445   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:31.529321   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:31.529349   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:31.578678   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:31.578708   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:33.603558   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:36.099473   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:34.295685   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:36.794759   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:34.514152   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:16:34.520578   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 200:
	ok
	I1024 20:16:34.522271   49708 api_server.go:141] control plane version: v1.28.3
	I1024 20:16:34.522289   49708 api_server.go:131] duration metric: took 3.974761353s to wait for apiserver health ...
	I1024 20:16:34.522297   49708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:16:34.522318   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:34.522363   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:34.568260   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:34.568280   49708 cri.go:89] found id: ""
	I1024 20:16:34.568287   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:34.568336   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.575356   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:34.575414   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:34.623358   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:34.623383   49708 cri.go:89] found id: ""
	I1024 20:16:34.623392   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:34.623449   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.628721   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:34.628777   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:34.675561   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:34.675583   49708 cri.go:89] found id: ""
	I1024 20:16:34.675592   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:34.675654   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.681613   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:34.681677   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:34.722858   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:34.722898   49708 cri.go:89] found id: ""
	I1024 20:16:34.722917   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:34.722974   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.727310   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:34.727376   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:34.768365   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:34.768383   49708 cri.go:89] found id: ""
	I1024 20:16:34.768390   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:34.768436   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.772776   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:34.772837   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:34.825992   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:34.826020   49708 cri.go:89] found id: ""
	I1024 20:16:34.826030   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:34.826083   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.830957   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:34.831011   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:34.878138   49708 cri.go:89] found id: ""
	I1024 20:16:34.878167   49708 logs.go:284] 0 containers: []
	W1024 20:16:34.878175   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:34.878180   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:34.878235   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:34.929288   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:34.929321   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:34.929328   49708 cri.go:89] found id: ""
	I1024 20:16:34.929338   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:34.929391   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.933731   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.938300   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:34.938326   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:34.980919   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:34.980944   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:35.021465   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:35.021495   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:35.165907   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:35.165935   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:35.212733   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:35.212759   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:35.620344   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:35.620395   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:35.669555   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:35.669588   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:35.720959   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:35.720987   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:35.762823   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:35.762852   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:35.805994   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:35.806021   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:35.879019   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:35.879046   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:35.941760   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:35.941796   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:35.995475   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:35.995515   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:38.526080   49708 system_pods.go:59] 8 kube-system pods found
	I1024 20:16:38.526106   49708 system_pods.go:61] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running
	I1024 20:16:38.526114   49708 system_pods.go:61] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running
	I1024 20:16:38.526122   49708 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running
	I1024 20:16:38.526128   49708 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running
	I1024 20:16:38.526133   49708 system_pods.go:61] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running
	I1024 20:16:38.526139   49708 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running
	I1024 20:16:38.526150   49708 system_pods.go:61] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:38.526159   49708 system_pods.go:61] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running
	I1024 20:16:38.526168   49708 system_pods.go:74] duration metric: took 4.003864797s to wait for pod list to return data ...
	I1024 20:16:38.526182   49708 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:16:38.528827   49708 default_sa.go:45] found service account: "default"
	I1024 20:16:38.528854   49708 default_sa.go:55] duration metric: took 2.662588ms for default service account to be created ...
	I1024 20:16:38.528863   49708 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:16:38.534560   49708 system_pods.go:86] 8 kube-system pods found
	I1024 20:16:38.534579   49708 system_pods.go:89] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running
	I1024 20:16:38.534585   49708 system_pods.go:89] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running
	I1024 20:16:38.534589   49708 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running
	I1024 20:16:38.534594   49708 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running
	I1024 20:16:38.534598   49708 system_pods.go:89] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running
	I1024 20:16:38.534602   49708 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running
	I1024 20:16:38.534610   49708 system_pods.go:89] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:38.534615   49708 system_pods.go:89] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running
	I1024 20:16:38.534622   49708 system_pods.go:126] duration metric: took 5.753846ms to wait for k8s-apps to be running ...
	I1024 20:16:38.534630   49708 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:16:38.534668   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:16:38.549835   49708 system_svc.go:56] duration metric: took 15.197069ms WaitForService to wait for kubelet.
	I1024 20:16:38.549856   49708 kubeadm.go:581] duration metric: took 4m22.553994431s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:16:38.549878   49708 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:16:38.553043   49708 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:16:38.553065   49708 node_conditions.go:123] node cpu capacity is 2
	I1024 20:16:38.553076   49708 node_conditions.go:105] duration metric: took 3.193057ms to run NodePressure ...
	I1024 20:16:38.553086   49708 start.go:228] waiting for startup goroutines ...
	I1024 20:16:38.553091   49708 start.go:233] waiting for cluster config update ...
	I1024 20:16:38.553100   49708 start.go:242] writing updated cluster config ...
	I1024 20:16:38.553348   49708 ssh_runner.go:195] Run: rm -f paused
	I1024 20:16:38.601183   49708 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:16:38.603463   49708 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-643126" cluster and "default" namespace by default
	I1024 20:16:38.597848   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:40.599437   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:38.795772   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:41.293845   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:43.096749   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:45.097165   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:47.097443   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:43.298644   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:45.797003   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:49.097716   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:51.597754   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:48.295110   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:50.796361   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:53.600174   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:56.097860   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:53.295856   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:55.295890   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:57.795597   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:58.097890   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:00.598554   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:59.795830   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:02.295268   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:03.098362   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:05.596632   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:04.296575   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:06.296820   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:08.098450   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:10.597828   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:12.599199   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:08.795717   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:11.296662   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:15.097014   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:17.097844   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:13.794373   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:15.795134   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:17.795531   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:19.098039   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:21.098582   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:19.796588   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:22.296536   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:23.597792   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:26.098066   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:24.795501   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:26.796240   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:27.488206   49071 pod_ready.go:81] duration metric: took 4m0.000518995s waiting for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	E1024 20:17:27.488255   49071 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:17:27.488267   49071 pod_ready.go:38] duration metric: took 4m4.400905907s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:17:27.488288   49071 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:17:27.488320   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:27.488379   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:27.544995   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:27.545022   49071 cri.go:89] found id: ""
	I1024 20:17:27.545033   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:27.545116   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.550068   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:27.550127   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:27.595184   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:27.595207   49071 cri.go:89] found id: ""
	I1024 20:17:27.595215   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:27.595265   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.600016   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:27.600075   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:27.644222   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:27.644254   49071 cri.go:89] found id: ""
	I1024 20:17:27.644265   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:27.644321   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.654982   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:27.655048   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:27.697751   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:27.697773   49071 cri.go:89] found id: ""
	I1024 20:17:27.697783   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:27.697838   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.701909   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:27.701969   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:27.746060   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:27.746085   49071 cri.go:89] found id: ""
	I1024 20:17:27.746094   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:27.746147   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.750335   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:27.750392   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:27.791948   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:27.791973   49071 cri.go:89] found id: ""
	I1024 20:17:27.791981   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:27.792045   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.796535   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:27.796616   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:27.839648   49071 cri.go:89] found id: ""
	I1024 20:17:27.839675   49071 logs.go:284] 0 containers: []
	W1024 20:17:27.839683   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:27.839689   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:27.839750   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:27.889284   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:27.889327   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:27.889334   49071 cri.go:89] found id: ""
	I1024 20:17:27.889343   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:27.889404   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.893661   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.897791   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:27.897819   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:27.941335   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:27.941369   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:27.954378   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:27.954409   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:28.115760   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:28.115792   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:28.171378   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:28.171409   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:28.211591   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:28.211620   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:28.247491   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247676   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247811   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247961   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:28.268681   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:28.268717   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:28.099979   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:28.791972   50077 pod_ready.go:81] duration metric: took 4m0.000695315s waiting for pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace to be "Ready" ...
	E1024 20:17:28.792005   50077 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:17:28.792032   50077 pod_ready.go:38] duration metric: took 4m1.199949971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:17:28.792069   50077 kubeadm.go:640] restartCluster took 5m7.653001653s
	W1024 20:17:28.792133   50077 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1024 20:17:28.792173   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1024 20:17:28.321382   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:28.321413   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:28.364236   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:28.364260   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:28.840985   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:28.841028   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:28.896806   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:28.896846   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:28.948487   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:28.948520   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:28.993469   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:28.993500   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:29.052064   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:29.052102   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:29.052154   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:29.052165   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052174   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052180   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052186   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:29.052191   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:29.052196   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:33.598790   50077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.806587354s)
	I1024 20:17:33.598873   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:17:33.614594   50077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:17:33.625146   50077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:17:33.635420   50077 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:17:33.635486   50077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1024 20:17:33.858680   50077 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
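Editor's note: the two log lines above show the cluster being re-initialized by shelling into the node and running kubeadm with a long --ignore-preflight-errors list. A minimal sketch of that pattern is below; it is not minikube's ssh_runner, and the key path and host are placeholders rather than values from this run (the flag list is trimmed for brevity).

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Same flag style as the log above; trimmed to a few entries for brevity.
    	ignored := "DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU"
    	remote := "sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH " +
    		"kubeadm init --config /var/tmp/minikube/kubeadm.yaml " +
    		"--ignore-preflight-errors=" + ignored
    	// Key path and address are placeholders, not values taken from this run.
    	cmd := exec.Command("ssh", "-i", "/path/to/machine_id_rsa", "docker@node-address", remote)
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		log.Fatalf("kubeadm init failed: %v", err)
    	}
    }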
	I1024 20:17:39.053169   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:17:39.069883   49071 api_server.go:72] duration metric: took 4m23.373979574s to wait for apiserver process to appear ...
	I1024 20:17:39.069910   49071 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:17:39.069953   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:39.070015   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:39.116676   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:39.116696   49071 cri.go:89] found id: ""
	I1024 20:17:39.116703   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:39.116752   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.121745   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:39.121814   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:39.174897   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:39.174932   49071 cri.go:89] found id: ""
	I1024 20:17:39.174943   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:39.175002   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.180933   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:39.181003   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:39.239666   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:39.239691   49071 cri.go:89] found id: ""
	I1024 20:17:39.239701   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:39.239754   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.244270   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:39.244328   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:39.285405   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:39.285432   49071 cri.go:89] found id: ""
	I1024 20:17:39.285443   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:39.285503   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.290326   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:39.290393   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:39.330723   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:39.330751   49071 cri.go:89] found id: ""
	I1024 20:17:39.330761   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:39.330816   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.335850   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:39.335917   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:39.375354   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:39.375377   49071 cri.go:89] found id: ""
	I1024 20:17:39.375387   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:39.375449   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.380243   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:39.380313   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:39.424841   49071 cri.go:89] found id: ""
	I1024 20:17:39.424875   49071 logs.go:284] 0 containers: []
	W1024 20:17:39.424885   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:39.424892   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:39.424950   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:39.464134   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:39.464153   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:39.464160   49071 cri.go:89] found id: ""
	I1024 20:17:39.464168   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:39.464224   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.468810   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.473093   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:39.473128   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:39.507113   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507292   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507432   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507588   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:39.530433   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:39.530479   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:39.666739   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:39.666765   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:39.710505   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:39.710538   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:39.749917   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:39.749946   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:39.799168   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:39.799196   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:39.846346   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:39.846377   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:40.273032   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:40.273065   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:40.320491   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:40.320521   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:40.378356   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:40.378395   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:40.421618   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:40.421647   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:40.466303   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:40.466334   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:40.478941   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:40.478966   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:40.544618   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:40.544642   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:40.544694   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:40.544706   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544718   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544725   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544733   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:40.544739   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:40.544747   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
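Editor's note: the "Found kubelet problem" / "X Problems detected in kubelet" lines above come from scanning the last 400 lines of the kubelet journal for suspicious entries. The sketch below shows that kind of scan in a self-contained form; the match patterns are assumptions for illustration, not minikube's actual list.

    package main

    import (
    	"bufio"
    	"bytes"
    	"fmt"
    	"os/exec"
    	"regexp"
    )

    func main() {
    	// Pull the last 400 kubelet journal lines, as the log-gathering step does.
    	out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400", "--no-pager").Output()
    	if err != nil {
    		fmt.Println("journalctl failed:", err)
    		return
    	}
    	// Coarse patterns for illustration; the real match list is not shown in this log.
    	problem := regexp.MustCompile(`(?i)(failed to|forbidden|error syncing|eviction)`)
    	sc := bufio.NewScanner(bytes.NewReader(out))
    	for sc.Scan() {
    		if line := sc.Text(); problem.MatchString(line) {
    			fmt.Println("Found kubelet problem:", line)
    		}
    	}
    }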
	I1024 20:17:46.481686   50077 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1024 20:17:46.481762   50077 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 20:17:46.481861   50077 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 20:17:46.482000   50077 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 20:17:46.482104   50077 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 20:17:46.482236   50077 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 20:17:46.482362   50077 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 20:17:46.482486   50077 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1024 20:17:46.482538   50077 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 20:17:46.484150   50077 out.go:204]   - Generating certificates and keys ...
	I1024 20:17:46.484246   50077 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 20:17:46.484315   50077 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 20:17:46.484402   50077 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1024 20:17:46.484509   50077 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1024 20:17:46.484603   50077 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1024 20:17:46.484689   50077 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1024 20:17:46.484778   50077 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1024 20:17:46.484870   50077 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1024 20:17:46.484972   50077 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1024 20:17:46.485069   50077 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1024 20:17:46.485123   50077 kubeadm.go:322] [certs] Using the existing "sa" key
	I1024 20:17:46.485200   50077 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 20:17:46.485263   50077 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 20:17:46.485343   50077 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 20:17:46.485430   50077 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 20:17:46.485503   50077 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 20:17:46.485590   50077 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 20:17:46.487065   50077 out.go:204]   - Booting up control plane ...
	I1024 20:17:46.487158   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 20:17:46.487219   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 20:17:46.487291   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 20:17:46.487401   50077 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 20:17:46.487551   50077 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 20:17:46.487623   50077 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.003664 seconds
	I1024 20:17:46.487756   50077 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 20:17:46.487882   50077 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 20:17:46.487940   50077 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 20:17:46.488123   50077 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-467375 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1024 20:17:46.488199   50077 kubeadm.go:322] [bootstrap-token] Using token: axp9sy.xsem3c8nzt72b18p
	I1024 20:17:46.490507   50077 out.go:204]   - Configuring RBAC rules ...
	I1024 20:17:46.490603   50077 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 20:17:46.490719   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 20:17:46.490832   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 20:17:46.490938   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 20:17:46.491009   50077 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 20:17:46.491044   50077 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 20:17:46.491083   50077 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 20:17:46.491091   50077 kubeadm.go:322] 
	I1024 20:17:46.491151   50077 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 20:17:46.491163   50077 kubeadm.go:322] 
	I1024 20:17:46.491224   50077 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 20:17:46.491231   50077 kubeadm.go:322] 
	I1024 20:17:46.491260   50077 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 20:17:46.491346   50077 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 20:17:46.491409   50077 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 20:17:46.491419   50077 kubeadm.go:322] 
	I1024 20:17:46.491511   50077 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 20:17:46.491621   50077 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 20:17:46.491715   50077 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 20:17:46.491725   50077 kubeadm.go:322] 
	I1024 20:17:46.491829   50077 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1024 20:17:46.491929   50077 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 20:17:46.491937   50077 kubeadm.go:322] 
	I1024 20:17:46.492064   50077 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token axp9sy.xsem3c8nzt72b18p \
	I1024 20:17:46.492249   50077 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f \
	I1024 20:17:46.492292   50077 kubeadm.go:322]     --control-plane 	  
	I1024 20:17:46.492302   50077 kubeadm.go:322] 
	I1024 20:17:46.492423   50077 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 20:17:46.492435   50077 kubeadm.go:322] 
	I1024 20:17:46.492532   50077 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token axp9sy.xsem3c8nzt72b18p \
	I1024 20:17:46.492675   50077 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
	I1024 20:17:46.492686   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:17:46.492694   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:17:46.494152   50077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:17:46.495677   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:17:46.510639   50077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
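Editor's note: the line above writes a 457-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist. The exact contents are not reproduced in the log; the sketch below only prints a generic bridge + portmap conflist of that shape, with the name and subnet chosen as illustrative values.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Generic bridge + portmap chain; subnet and names are illustrative values.
    	conf := map[string]interface{}{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]interface{}{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"isDefaultGateway": true,
    				"ipMasq":           true,
    				"hairpinMode":      true,
    				"ipam": map[string]interface{}{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{
    				"type":         "portmap",
    				"capabilities": map[string]bool{"portMappings": true},
    			},
    		},
    	}
    	b, _ := json.MarshalIndent(conf, "", "  ")
    	fmt.Println(string(b)) // this kind of payload would land in /etc/cni/net.d/1-k8s.conflist
    }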
	I1024 20:17:46.539872   50077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:17:46.539933   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:46.539945   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=old-k8s-version-467375 minikube.k8s.io/updated_at=2023_10_24T20_17_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:46.984338   50077 ops.go:34] apiserver oom_adj: -16
	I1024 20:17:46.984391   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:47.163022   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:47.798557   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:48.298499   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:48.798506   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:49.298076   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:49.798120   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.298504   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.798493   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:51.298777   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:51.798477   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:52.298309   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:52.798243   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
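Editor's note: the repeated "kubectl get sa default" lines above (and again further down) are a poll loop waiting for the default service account to appear before privileges are granted. A minimal sketch of that wait loop, assuming a 500ms interval and an arbitrary two-minute deadline:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.16.0/kubectl" // path as shown in the log
    	deadline := time.Now().Add(2 * time.Minute)             // timeout chosen for the sketch
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if cmd.Run() == nil {
    			fmt.Println("default service account exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }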
	I1024 20:17:50.546645   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:17:50.552245   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 200:
	ok
	I1024 20:17:50.553721   49071 api_server.go:141] control plane version: v1.28.3
	I1024 20:17:50.553747   49071 api_server.go:131] duration metric: took 11.483829454s to wait for apiserver health ...
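Editor's note: the healthz wait above ends with a plain HTTPS GET against the apiserver's /healthz endpoint returning 200/"ok". The sketch below reproduces that probe; the real client authenticates with the cluster's CA and certificates, so InsecureSkipVerify here is a simplification to keep the example short.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.50.162:8443/healthz")
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
    }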
	I1024 20:17:50.553757   49071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:17:50.553784   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:50.553844   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:50.594504   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:50.594528   49071 cri.go:89] found id: ""
	I1024 20:17:50.594536   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:50.594586   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.598912   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:50.598963   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:50.644339   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:50.644355   49071 cri.go:89] found id: ""
	I1024 20:17:50.644362   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:50.644406   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.649046   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:50.649099   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:50.688245   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:50.688268   49071 cri.go:89] found id: ""
	I1024 20:17:50.688278   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:50.688330   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.692382   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:50.692429   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:50.736359   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:50.736384   49071 cri.go:89] found id: ""
	I1024 20:17:50.736393   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:50.736451   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.741226   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:50.741287   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:50.797894   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:50.797920   49071 cri.go:89] found id: ""
	I1024 20:17:50.797930   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:50.797997   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.802725   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:50.802781   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:50.851081   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:50.851106   49071 cri.go:89] found id: ""
	I1024 20:17:50.851115   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:50.851166   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.855549   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:50.855600   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:50.909237   49071 cri.go:89] found id: ""
	I1024 20:17:50.909265   49071 logs.go:284] 0 containers: []
	W1024 20:17:50.909276   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:50.909283   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:50.909355   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:50.958541   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:50.958567   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:50.958574   49071 cri.go:89] found id: ""
	I1024 20:17:50.958583   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:50.958638   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.962947   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.967261   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:50.967283   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:51.087158   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:51.087190   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:51.144421   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:51.144458   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:51.200040   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:51.200072   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:51.255703   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:51.255740   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:51.683831   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:51.683869   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:51.726821   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:51.726856   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:51.776977   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:51.777006   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:51.822826   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:51.822861   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:51.873557   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.873838   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.874063   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.874313   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:51.900648   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:51.900690   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:51.916123   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:51.916161   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:51.960440   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:51.960470   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:52.010020   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:52.010051   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:52.051039   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:52.051063   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:52.051113   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:52.051127   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051142   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051162   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051173   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:52.051183   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:52.051190   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:53.298168   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:53.798546   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:54.298175   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:54.798534   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:55.298510   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:55.798562   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:56.297914   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:56.797930   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:57.298527   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:57.798493   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:58.298630   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:58.798550   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:59.298526   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:59.798537   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:00.298538   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:00.798072   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:01.014502   50077 kubeadm.go:1081] duration metric: took 14.474620601s to wait for elevateKubeSystemPrivileges.
	I1024 20:18:01.014547   50077 kubeadm.go:406] StartCluster complete in 5m39.9402605s
	I1024 20:18:01.014569   50077 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:18:01.014667   50077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:18:01.017210   50077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:18:01.017539   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:18:01.017574   50077 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:18:01.017659   50077 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017666   50077 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017677   50077 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-467375"
	W1024 20:18:01.017690   50077 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:18:01.017695   50077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-467375"
	I1024 20:18:01.017699   50077 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017718   50077 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-467375"
	W1024 20:18:01.017727   50077 addons.go:240] addon metrics-server should already be in state true
	I1024 20:18:01.017731   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.017777   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.017816   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:18:01.018053   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018088   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.018111   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018122   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018149   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.018257   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.036179   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37631
	I1024 20:18:01.036834   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.037477   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.037504   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.037665   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43905
	I1024 20:18:01.037824   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34475
	I1024 20:18:01.037912   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.038074   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.038220   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.038306   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.038850   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.038867   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.039010   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.039021   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.039391   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.039410   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.039925   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.039949   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.039974   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.040014   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.041243   50077 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-467375"
	W1024 20:18:01.041258   50077 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:18:01.041277   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.041611   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.041645   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.056254   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
	I1024 20:18:01.056888   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.057215   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I1024 20:18:01.057487   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.057502   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.057895   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.057956   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.058536   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.058574   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.058844   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.058857   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.058929   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I1024 20:18:01.059172   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.059288   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.059451   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.059964   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.059975   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.060353   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.060565   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.061107   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.062802   50077 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:18:01.064189   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:18:01.064209   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:18:01.064230   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.062154   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.066082   50077 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:18:01.067046   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.067880   50077 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:18:01.067901   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:18:01.067921   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.068400   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.068432   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.069073   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.069343   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.069484   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.069587   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.071678   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.072196   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.072220   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.072596   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.072776   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.072905   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.073043   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.079576   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I1024 20:18:01.080025   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.080592   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.080613   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.081035   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.081240   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.083090   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.083404   50077 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:18:01.083425   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:18:01.083443   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.086433   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.086802   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.086824   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.087003   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.087198   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.087348   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.087506   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.197205   50077 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-467375" context rescaled to 1 replicas
	I1024 20:18:01.197249   50077 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:18:01.200328   50077 out.go:177] * Verifying Kubernetes components...
	I1024 20:18:02.061971   49071 system_pods.go:59] 8 kube-system pods found
	I1024 20:18:02.062015   49071 system_pods.go:61] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running
	I1024 20:18:02.062024   49071 system_pods.go:61] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running
	I1024 20:18:02.062031   49071 system_pods.go:61] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running
	I1024 20:18:02.062040   49071 system_pods.go:61] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running
	I1024 20:18:02.062047   49071 system_pods.go:61] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running
	I1024 20:18:02.062054   49071 system_pods.go:61] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running
	I1024 20:18:02.062066   49071 system_pods.go:61] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:02.062078   49071 system_pods.go:61] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running
	I1024 20:18:02.062086   49071 system_pods.go:74] duration metric: took 11.508322005s to wait for pod list to return data ...
	I1024 20:18:02.062098   49071 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:18:02.065560   49071 default_sa.go:45] found service account: "default"
	I1024 20:18:02.065585   49071 default_sa.go:55] duration metric: took 3.476366ms for default service account to be created ...
	I1024 20:18:02.065595   49071 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:18:02.073224   49071 system_pods.go:86] 8 kube-system pods found
	I1024 20:18:02.073253   49071 system_pods.go:89] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running
	I1024 20:18:02.073262   49071 system_pods.go:89] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running
	I1024 20:18:02.073269   49071 system_pods.go:89] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running
	I1024 20:18:02.073277   49071 system_pods.go:89] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running
	I1024 20:18:02.073284   49071 system_pods.go:89] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running
	I1024 20:18:02.073290   49071 system_pods.go:89] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running
	I1024 20:18:02.073313   49071 system_pods.go:89] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:02.073326   49071 system_pods.go:89] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running
	I1024 20:18:02.073335   49071 system_pods.go:126] duration metric: took 7.733883ms to wait for k8s-apps to be running ...
	I1024 20:18:02.073346   49071 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:18:02.073405   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:18:02.093085   49071 system_svc.go:56] duration metric: took 19.727658ms WaitForService to wait for kubelet.
	I1024 20:18:02.093113   49071 kubeadm.go:581] duration metric: took 4m46.397215509s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:18:02.093135   49071 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:18:02.101982   49071 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:18:02.102007   49071 node_conditions.go:123] node cpu capacity is 2
	I1024 20:18:02.102018   49071 node_conditions.go:105] duration metric: took 8.878046ms to run NodePressure ...
	I1024 20:18:02.102035   49071 start.go:228] waiting for startup goroutines ...
	I1024 20:18:02.102041   49071 start.go:233] waiting for cluster config update ...
	I1024 20:18:02.102054   49071 start.go:242] writing updated cluster config ...
	I1024 20:18:02.102767   49071 ssh_runner.go:195] Run: rm -f paused
	I1024 20:18:02.159693   49071 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:18:02.161831   49071 out.go:177] * Done! kubectl is now configured to use "no-preload-014826" cluster and "default" namespace by default
	I1024 20:18:01.201778   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:18:01.315241   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:18:01.335753   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:18:01.339160   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:18:01.339182   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:18:01.376704   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:18:01.376726   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:18:01.385150   50077 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-467375" to be "Ready" ...
	I1024 20:18:01.385223   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 20:18:01.443957   50077 node_ready.go:49] node "old-k8s-version-467375" has status "Ready":"True"
	I1024 20:18:01.443978   50077 node_ready.go:38] duration metric: took 58.799937ms waiting for node "old-k8s-version-467375" to be "Ready" ...
	I1024 20:18:01.443987   50077 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:18:01.453968   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:18:01.453998   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:18:01.481599   50077 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:01.543065   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:18:02.715998   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.400725332s)
	I1024 20:18:02.716049   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716062   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716066   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.38027937s)
	I1024 20:18:02.716103   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716120   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716152   50077 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.330913087s)
	I1024 20:18:02.716170   50077 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
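The sed pipeline completed above splices a hosts plugin block into the CoreDNS Corefile so in-cluster workloads can resolve host.minikube.internal to the host-side address (192.168.39.1 in this run). Reconstructed from the sed expression in the log (not copied from the live cluster), the affected part of the ConfigMap ends up roughly like:

    log
    errors
    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf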
	I1024 20:18:02.716377   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716392   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.716402   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716410   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716512   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.716522   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716536   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.716547   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716557   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716623   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716637   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.717532   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.717534   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.717554   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.790444   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.790480   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.790901   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.790925   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895176   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.352065096s)
	I1024 20:18:02.895243   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.895268   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.895611   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.895630   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895634   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.895639   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.895654   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.895875   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.895888   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895905   50077 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-467375"
	I1024 20:18:02.897664   50077 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1024 20:18:02.899508   50077 addons.go:502] enable addons completed in 1.881940564s: enabled=[storage-provisioner default-storageclass metrics-server]
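Addon installation in this run works by copying the manifests to /etc/kubernetes/addons on the guest and applying them with the cluster's bundled kubectl (v1.16.0) against /var/lib/minikube/kubeconfig. A quick way to verify the result by hand, assuming the VM is still up, is to replay the same in-guest kubectl (a sketch, not part of the test flow):

    minikube -p old-k8s-version-467375 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl -n kube-system get deploy metrics-server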
	I1024 20:18:03.719917   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:06.207388   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:08.207967   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:10.708258   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:12.208133   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"True"
	I1024 20:18:12.208155   50077 pod_ready.go:81] duration metric: took 10.726531733s waiting for pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.208166   50077 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9bpht" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.213213   50077 pod_ready.go:92] pod "kube-proxy-9bpht" in "kube-system" namespace has status "Ready":"True"
	I1024 20:18:12.213237   50077 pod_ready.go:81] duration metric: took 5.063943ms waiting for pod "kube-proxy-9bpht" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.213247   50077 pod_ready.go:38] duration metric: took 10.769249135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:18:12.213267   50077 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:18:12.213344   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:18:12.228272   50077 api_server.go:72] duration metric: took 11.030986098s to wait for apiserver process to appear ...
	I1024 20:18:12.228295   50077 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:18:12.228313   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:18:12.234663   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1024 20:18:12.235584   50077 api_server.go:141] control plane version: v1.16.0
	I1024 20:18:12.235599   50077 api_server.go:131] duration metric: took 7.297294ms to wait for apiserver health ...
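The health probe above is a GET against the apiserver's /healthz endpoint at https://192.168.39.71:8443, and the 200 response with body "ok" is what the start code treats as healthy. To repeat the same check from the test host, any authenticated client works; one sketch, assuming the profile's kubeconfig context is in place, is kubectl's raw passthrough:

    kubectl --context old-k8s-version-467375 get --raw=/healthz
    # expected output: ok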
	I1024 20:18:12.235605   50077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:18:12.239203   50077 system_pods.go:59] 4 kube-system pods found
	I1024 20:18:12.239228   50077 system_pods.go:61] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.239235   50077 system_pods.go:61] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.239246   50077 system_pods.go:61] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.239292   50077 system_pods.go:61] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.239307   50077 system_pods.go:74] duration metric: took 3.696523ms to wait for pod list to return data ...
	I1024 20:18:12.239315   50077 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:18:12.242065   50077 default_sa.go:45] found service account: "default"
	I1024 20:18:12.242080   50077 default_sa.go:55] duration metric: took 2.760528ms for default service account to be created ...
	I1024 20:18:12.242086   50077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:18:12.245602   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.245624   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.245631   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.245640   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.245648   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.245664   50077 retry.go:31] will retry after 287.935783ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:12.538837   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.538900   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.538924   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.538942   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.538955   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.538979   50077 retry.go:31] will retry after 320.680304ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:12.864800   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.864826   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.864832   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.864838   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.864844   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.864858   50077 retry.go:31] will retry after 364.04425ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:13.233903   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:13.233927   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:13.233934   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:13.233941   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:13.233946   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:13.233974   50077 retry.go:31] will retry after 559.821457ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:13.799208   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:13.799234   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:13.799240   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:13.799246   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:13.799252   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:13.799266   50077 retry.go:31] will retry after 522.263157ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:14.325735   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:14.325767   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:14.325776   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:14.325789   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:14.325799   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:14.325817   50077 retry.go:31] will retry after 668.137602ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:14.999589   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:14.999614   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:14.999620   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:14.999626   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:14.999632   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:14.999646   50077 retry.go:31] will retry after 859.983274ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:15.865531   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:15.865556   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:15.865561   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:15.865568   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:15.865573   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:15.865589   50077 retry.go:31] will retry after 1.238765858s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:17.109999   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:17.110023   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:17.110028   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:17.110035   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:17.110041   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:17.110054   50077 retry.go:31] will retry after 1.485428629s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:18.600612   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:18.600635   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:18.600641   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:18.600647   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:18.600652   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:18.600665   50077 retry.go:31] will retry after 2.290652681s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:20.897529   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:20.897556   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:20.897562   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:20.897571   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:20.897577   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:20.897593   50077 retry.go:31] will retry after 2.367552906s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:23.270766   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:23.270792   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:23.270800   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:23.270810   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:23.270817   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:23.270834   50077 retry.go:31] will retry after 2.861357376s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:26.136663   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:26.136696   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:26.136704   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:26.136715   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:26.136725   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:26.136743   50077 retry.go:31] will retry after 3.526737387s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:29.670148   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:29.670175   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:29.670181   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:29.670188   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:29.670195   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:29.670215   50077 retry.go:31] will retry after 5.450931485s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:35.125964   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:35.125989   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:35.125994   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:35.126001   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:35.126007   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:35.126022   50077 retry.go:31] will retry after 5.914408322s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:41.046649   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:41.046670   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:41.046677   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:41.046684   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:41.046690   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:41.046704   50077 retry.go:31] will retry after 6.748980526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:47.802189   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:47.802212   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:47.802217   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:47.802225   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:47.802230   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:47.802244   50077 retry.go:31] will retry after 8.662562452s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:56.471025   50077 system_pods.go:86] 7 kube-system pods found
	I1024 20:18:56.471062   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:56.471071   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:18:56.471079   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:18:56.471086   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:56.471094   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Pending
	I1024 20:18:56.471108   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:56.471121   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:56.471142   50077 retry.go:31] will retry after 10.356793998s: missing components: etcd, kube-scheduler
	I1024 20:19:06.834711   50077 system_pods.go:86] 8 kube-system pods found
	I1024 20:19:06.834741   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:19:06.834749   50077 system_pods.go:89] "etcd-old-k8s-version-467375" [8e194c9a-b258-4488-9fda-24b681d09d8d] Pending
	I1024 20:19:06.834755   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:19:06.834762   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:19:06.834767   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:19:06.834772   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Running
	I1024 20:19:06.834782   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:19:06.834792   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:19:06.834809   50077 retry.go:31] will retry after 14.609583217s: missing components: etcd
	I1024 20:19:21.450651   50077 system_pods.go:86] 8 kube-system pods found
	I1024 20:19:21.450678   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:19:21.450685   50077 system_pods.go:89] "etcd-old-k8s-version-467375" [8e194c9a-b258-4488-9fda-24b681d09d8d] Running
	I1024 20:19:21.450689   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:19:21.450693   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:19:21.450699   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:19:21.450709   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Running
	I1024 20:19:21.450719   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:19:21.450732   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:19:21.450745   50077 system_pods.go:126] duration metric: took 1m9.20865321s to wait for k8s-apps to be running ...
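The retry.go lines above show the k8s-apps wait as a plain poll: list the kube-system pods, report which control-plane components are still missing, and retry after a progressively longer delay until everything is Running (about 1m9s in this run). A minimal stdlib-only Go sketch of that pattern follows; componentsRunning() is a hypothetical stand-in for the real pod check, not minikube's actual code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // componentsRunning is a hypothetical stand-in for listing kube-system pods
    // and checking that etcd, kube-apiserver, kube-controller-manager and
    // kube-scheduler are all Running.
    func componentsRunning() (bool, error) { return false, nil }

    // waitForComponents polls with a growing delay, mirroring the
    // "will retry after ..." messages in the log above.
    func waitForComponents(timeout time.Duration) error {
        delay := 300 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ok, err := componentsRunning()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            fmt.Printf("will retry after %v: missing components\n", delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // back off a little more on each attempt
        }
        return errors.New("timed out waiting for kube-system components")
    }

    func main() {
        if err := waitForComponents(2 * time.Second); err != nil {
            fmt.Println(err)
        }
    }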
	I1024 20:19:21.450757   50077 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:19:21.450800   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:19:21.468030   50077 system_svc.go:56] duration metric: took 17.254248ms WaitForService to wait for kubelet.
	I1024 20:19:21.468061   50077 kubeadm.go:581] duration metric: took 1m20.270780436s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:19:21.468089   50077 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:19:21.471958   50077 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:19:21.471982   50077 node_conditions.go:123] node cpu capacity is 2
	I1024 20:19:21.471993   50077 node_conditions.go:105] duration metric: took 3.898893ms to run NodePressure ...
	I1024 20:19:21.472003   50077 start.go:228] waiting for startup goroutines ...
	I1024 20:19:21.472008   50077 start.go:233] waiting for cluster config update ...
	I1024 20:19:21.472018   50077 start.go:242] writing updated cluster config ...
	I1024 20:19:21.472257   50077 ssh_runner.go:195] Run: rm -f paused
	I1024 20:19:21.520082   50077 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1024 20:19:21.522545   50077 out.go:177] 
	W1024 20:19:21.524125   50077 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1024 20:19:21.525515   50077 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1024 20:19:21.527113   50077 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-467375" cluster and "default" namespace by default
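The warning above flags a 12-minor-version skew between the host's kubectl (1.28.3) and this cluster (1.16.0). Per the hint in the log, the profile-scoped passthrough sidesteps the skew by using a kubectl that matches the cluster version; for this profile that would look like (sketch):

    minikube -p old-k8s-version-467375 kubectl -- get pods -A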
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 20:11:58 UTC, ends at Tue 2023-10-24 20:28:23 UTC. --
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.197155027Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179303197110572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=d3a313fe-8475-4416-a28c-6f94b1d9a7f9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.197894208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9f762b5e-9ffe-4092-8241-0d22d017927f name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.197966726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9f762b5e-9ffe-4092-8241-0d22d017927f name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.198167921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15cfea4cc862a2fa28d852aa206aa8cff0b5f94827f6ef972bf1caea394e169f,PodSandboxId:0dd3c0060f763986335f788e173b33ba65e31ffcc49f3ce4f1ac5c757bf5823e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178683553085708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9941fc4f-34d2-41d8-887e-93bfd845b574,},Annotations:map[string]string{io.kubernetes.container.hash: b65eb62b,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d460706afb1a63d29b784f6dccefc3f8436a7e3e30f77c0504564c591528a87,PodSandboxId:8fb53b434c655cca38f640f57b12a0d1f28721a87b7051816841502bacebac2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698178683424491699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9bpht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed713982-614e-41c9-a305-5e1841aab7d2,},Annotations:map[string]string{io.kubernetes.container.hash: 52301ca8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3befe9d41186c52d2fd0cbe24e6e412502a31fd64323e303e11cdc850b29167,PodSandboxId:6f437dfde8ea005b06f7a2f5b6f9c086168133bc1afa6c0b7100a230288127b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698178682274046208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbmqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60dab487-6a1c-4223-9a74-be06f2331625,},Annotations:map[string]string{io.kubernetes.container.hash: c29ec159,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850ec0f2b7ba0a12abc50c0249882b2894837d785fc4cd6bdcfb2d6a023b6e5a,PodSandboxId:306e6f6f7cd3444e3a4b27d5e4fed3a3fe44666719322cb0a75ad324c4002630,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698178657505242356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e07935c110f777397416fb6e544a55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 15009e04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d38854e1b720b631aff201fbe7600cacd87a505d1e2a94ec09a3fec249c582,PodSandboxId:b90660ec9923f84f67ceead419faa4c84997f02e352abd2815c48e3e55b600c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698178656531858553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf056c13d767f95188318d77e512053638a457924525f0625d09740e6ead087,PodSandboxId:a6549fc51cf2bd282dde8d52054ddff84c8235f4551ba3341385f9deabfe8532,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698178656038381617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6116ab191670e7b565264bfc41b1631776726bd1036b20cf34cc6700b709d7e8,PodSandboxId:68b686c2126cb7b154d7f588600684325818516747670ce660bc3e6b56305f48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698178655844958759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc15f9a1e3d6b08274d552bb9acdea0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ec6507f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9f762b5e-9ffe-4092-8241-0d22d017927f name=/runtime.v1.RuntimeService/ListContainers
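The ListContainers request/response pairs in this journal are the CRI calls that the kubelet (and tools such as crictl) issue against CRI-O. The same container list can be pulled interactively from the node, assuming crictl is present on the guest image (a sketch):

    minikube -p old-k8s-version-467375 ssh -- sudo crictl ps -a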
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.242481718Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=facdfc89-d9b8-472f-a0d7-f4ab5cb9c989 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.242691042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=facdfc89-d9b8-472f-a0d7-f4ab5cb9c989 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.245190860Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8e204cb8-b850-4cb0-bc30-99294338d06d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.245788327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179303245774275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=8e204cb8-b850-4cb0-bc30-99294338d06d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.246439608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=22d04c03-b315-4c64-bee8-b363b04eedfe name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.246636394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=22d04c03-b315-4c64-bee8-b363b04eedfe name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.246850104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15cfea4cc862a2fa28d852aa206aa8cff0b5f94827f6ef972bf1caea394e169f,PodSandboxId:0dd3c0060f763986335f788e173b33ba65e31ffcc49f3ce4f1ac5c757bf5823e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178683553085708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9941fc4f-34d2-41d8-887e-93bfd845b574,},Annotations:map[string]string{io.kubernetes.container.hash: b65eb62b,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d460706afb1a63d29b784f6dccefc3f8436a7e3e30f77c0504564c591528a87,PodSandboxId:8fb53b434c655cca38f640f57b12a0d1f28721a87b7051816841502bacebac2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698178683424491699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9bpht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed713982-614e-41c9-a305-5e1841aab7d2,},Annotations:map[string]string{io.kubernetes.container.hash: 52301ca8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3befe9d41186c52d2fd0cbe24e6e412502a31fd64323e303e11cdc850b29167,PodSandboxId:6f437dfde8ea005b06f7a2f5b6f9c086168133bc1afa6c0b7100a230288127b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698178682274046208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbmqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60dab487-6a1c-4223-9a74-be06f2331625,},Annotations:map[string]string{io.kubernetes.container.hash: c29ec159,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850ec0f2b7ba0a12abc50c0249882b2894837d785fc4cd6bdcfb2d6a023b6e5a,PodSandboxId:306e6f6f7cd3444e3a4b27d5e4fed3a3fe44666719322cb0a75ad324c4002630,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698178657505242356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e07935c110f777397416fb6e544a55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 15009e04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d38854e1b720b631aff201fbe7600cacd87a505d1e2a94ec09a3fec249c582,PodSandboxId:b90660ec9923f84f67ceead419faa4c84997f02e352abd2815c48e3e55b600c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698178656531858553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf056c13d767f95188318d77e512053638a457924525f0625d09740e6ead087,PodSandboxId:a6549fc51cf2bd282dde8d52054ddff84c8235f4551ba3341385f9deabfe8532,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698178656038381617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6116ab191670e7b565264bfc41b1631776726bd1036b20cf34cc6700b709d7e8,PodSandboxId:68b686c2126cb7b154d7f588600684325818516747670ce660bc3e6b56305f48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698178655844958759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc15f9a1e3d6b08274d552bb9acdea0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ec6507f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=22d04c03-b315-4c64-bee8-b363b04eedfe name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.289055754Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=acf3dcb7-c701-44cc-944c-37295746b052 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.289126968Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=acf3dcb7-c701-44cc-944c-37295746b052 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.290386040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6f720ff5-37b3-45a3-86f0-77748f8ba6fb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.290849188Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179303290835789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=6f720ff5-37b3-45a3-86f0-77748f8ba6fb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.291466656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=55a59095-01bf-4224-8d07-624ca6b602ec name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.291592705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=55a59095-01bf-4224-8d07-624ca6b602ec name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.291764938Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15cfea4cc862a2fa28d852aa206aa8cff0b5f94827f6ef972bf1caea394e169f,PodSandboxId:0dd3c0060f763986335f788e173b33ba65e31ffcc49f3ce4f1ac5c757bf5823e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178683553085708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9941fc4f-34d2-41d8-887e-93bfd845b574,},Annotations:map[string]string{io.kubernetes.container.hash: b65eb62b,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d460706afb1a63d29b784f6dccefc3f8436a7e3e30f77c0504564c591528a87,PodSandboxId:8fb53b434c655cca38f640f57b12a0d1f28721a87b7051816841502bacebac2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698178683424491699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9bpht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed713982-614e-41c9-a305-5e1841aab7d2,},Annotations:map[string]string{io.kubernetes.container.hash: 52301ca8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3befe9d41186c52d2fd0cbe24e6e412502a31fd64323e303e11cdc850b29167,PodSandboxId:6f437dfde8ea005b06f7a2f5b6f9c086168133bc1afa6c0b7100a230288127b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698178682274046208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbmqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60dab487-6a1c-4223-9a74-be06f2331625,},Annotations:map[string]string{io.kubernetes.container.hash: c29ec159,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850ec0f2b7ba0a12abc50c0249882b2894837d785fc4cd6bdcfb2d6a023b6e5a,PodSandboxId:306e6f6f7cd3444e3a4b27d5e4fed3a3fe44666719322cb0a75ad324c4002630,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698178657505242356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e07935c110f777397416fb6e544a55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 15009e04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d38854e1b720b631aff201fbe7600cacd87a505d1e2a94ec09a3fec249c582,PodSandboxId:b90660ec9923f84f67ceead419faa4c84997f02e352abd2815c48e3e55b600c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698178656531858553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf056c13d767f95188318d77e512053638a457924525f0625d09740e6ead087,PodSandboxId:a6549fc51cf2bd282dde8d52054ddff84c8235f4551ba3341385f9deabfe8532,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698178656038381617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6116ab191670e7b565264bfc41b1631776726bd1036b20cf34cc6700b709d7e8,PodSandboxId:68b686c2126cb7b154d7f588600684325818516747670ce660bc3e6b56305f48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698178655844958759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc15f9a1e3d6b08274d552bb9acdea0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ec6507f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=55a59095-01bf-4224-8d07-624ca6b602ec name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.328069145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=36172567-72b9-4f9b-9fdd-8a1d28e709c0 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.328152905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=36172567-72b9-4f9b-9fdd-8a1d28e709c0 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.329716955Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c46a4778-a026-4255-86a5-ae8dfe60ab47 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.330228183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179303330214869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=c46a4778-a026-4255-86a5-ae8dfe60ab47 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.331007144Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a6cd90e7-5366-4d9e-a8bf-37054de85da1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.331086024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a6cd90e7-5366-4d9e-a8bf-37054de85da1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:28:23 old-k8s-version-467375 crio[713]: time="2023-10-24 20:28:23.331250853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15cfea4cc862a2fa28d852aa206aa8cff0b5f94827f6ef972bf1caea394e169f,PodSandboxId:0dd3c0060f763986335f788e173b33ba65e31ffcc49f3ce4f1ac5c757bf5823e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178683553085708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9941fc4f-34d2-41d8-887e-93bfd845b574,},Annotations:map[string]string{io.kubernetes.container.hash: b65eb62b,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d460706afb1a63d29b784f6dccefc3f8436a7e3e30f77c0504564c591528a87,PodSandboxId:8fb53b434c655cca38f640f57b12a0d1f28721a87b7051816841502bacebac2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698178683424491699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9bpht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed713982-614e-41c9-a305-5e1841aab7d2,},Annotations:map[string]string{io.kubernetes.container.hash: 52301ca8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3befe9d41186c52d2fd0cbe24e6e412502a31fd64323e303e11cdc850b29167,PodSandboxId:6f437dfde8ea005b06f7a2f5b6f9c086168133bc1afa6c0b7100a230288127b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698178682274046208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbmqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60dab487-6a1c-4223-9a74-be06f2331625,},Annotations:map[string]string{io.kubernetes.container.hash: c29ec159,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850ec0f2b7ba0a12abc50c0249882b2894837d785fc4cd6bdcfb2d6a023b6e5a,PodSandboxId:306e6f6f7cd3444e3a4b27d5e4fed3a3fe44666719322cb0a75ad324c4002630,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698178657505242356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e07935c110f777397416fb6e544a55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 15009e04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d38854e1b720b631aff201fbe7600cacd87a505d1e2a94ec09a3fec249c582,PodSandboxId:b90660ec9923f84f67ceead419faa4c84997f02e352abd2815c48e3e55b600c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698178656531858553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf056c13d767f95188318d77e512053638a457924525f0625d09740e6ead087,PodSandboxId:a6549fc51cf2bd282dde8d52054ddff84c8235f4551ba3341385f9deabfe8532,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698178656038381617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6116ab191670e7b565264bfc41b1631776726bd1036b20cf34cc6700b709d7e8,PodSandboxId:68b686c2126cb7b154d7f588600684325818516747670ce660bc3e6b56305f48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698178655844958759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc15f9a1e3d6b08274d552bb9acdea0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ec6507f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a6cd90e7-5366-4d9e-a8bf-37054de85da1 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	15cfea4cc862a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   0dd3c0060f763       storage-provisioner
	2d460706afb1a       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   8fb53b434c655       kube-proxy-9bpht
	f3befe9d41186       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   6f437dfde8ea0       coredns-5644d7b6d9-nbmqt
	850ec0f2b7ba0       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   306e6f6f7cd34       etcd-old-k8s-version-467375
	53d38854e1b72       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   b90660ec9923f       kube-scheduler-old-k8s-version-467375
	3bf056c13d767       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   a6549fc51cf2b       kube-controller-manager-old-k8s-version-467375
	6116ab191670e       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   68b686c2126cb       kube-apiserver-old-k8s-version-467375
	
	* 
	* ==> coredns [f3befe9d41186c52d2fd0cbe24e6e412502a31fd64323e303e11cdc850b29167] <==
	* .:53
	2023-10-24T20:18:02.676Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-24T20:18:02.676Z [INFO] CoreDNS-1.6.2
	2023-10-24T20:18:02.676Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-10-24T20:18:29.106Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-467375
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-467375
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=old-k8s-version-467375
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T20_17_46_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 20:17:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 20:27:41 +0000   Tue, 24 Oct 2023 20:17:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 20:27:41 +0000   Tue, 24 Oct 2023 20:17:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 20:27:41 +0000   Tue, 24 Oct 2023 20:17:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 20:27:41 +0000   Tue, 24 Oct 2023 20:17:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    old-k8s-version-467375
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 cf177f680a9a4008b36f2fe5fe7a9338
	 System UUID:                cf177f68-0a9a-4008-b36f-2fe5fe7a9338
	 Boot ID:                    1c9add44-c102-4a36-9938-ce862bd11598
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-nbmqt                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-467375                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                kube-apiserver-old-k8s-version-467375             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                kube-controller-manager-old-k8s-version-467375    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                kube-proxy-9bpht                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-467375             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                metrics-server-74d5856cc6-b5qcv                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  Starting                 10m                kubelet, old-k8s-version-467375     Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet, old-k8s-version-467375     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-467375     Node old-k8s-version-467375 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-467375     Node old-k8s-version-467375 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-467375     Node old-k8s-version-467375 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-467375  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct24 20:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072882] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.641488] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.471187] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141222] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.508089] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct24 20:12] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.151952] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.165011] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.145338] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.261366] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +20.463140] systemd-fstab-generator[1029]: Ignoring "noauto" for root device
	[  +0.479303] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.134895] kauditd_printk_skb: 13 callbacks suppressed
	[Oct24 20:13] kauditd_printk_skb: 4 callbacks suppressed
	[Oct24 20:17] systemd-fstab-generator[3168]: Ignoring "noauto" for root device
	[  +0.736063] kauditd_printk_skb: 8 callbacks suppressed
	[Oct24 20:18] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [850ec0f2b7ba0a12abc50c0249882b2894837d785fc4cd6bdcfb2d6a023b6e5a] <==
	* 2023-10-24 20:17:37.646887 I | raft: newRaft 226d7ac4e2309206 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-10-24 20:17:37.646912 I | raft: 226d7ac4e2309206 became follower at term 1
	2023-10-24 20:17:37.654879 W | auth: simple token is not cryptographically signed
	2023-10-24 20:17:37.661258 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-24 20:17:37.663183 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-24 20:17:37.663426 I | embed: listening for metrics on http://192.168.39.71:2381
	2023-10-24 20:17:37.664137 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-24 20:17:37.665162 I | etcdserver/membership: added member 226d7ac4e2309206 [https://192.168.39.71:2380] to cluster 98fbf1e9ed6d9a6e
	2023-10-24 20:17:37.665321 I | etcdserver: 226d7ac4e2309206 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-24 20:17:38.047436 I | raft: 226d7ac4e2309206 is starting a new election at term 1
	2023-10-24 20:17:38.047628 I | raft: 226d7ac4e2309206 became candidate at term 2
	2023-10-24 20:17:38.047731 I | raft: 226d7ac4e2309206 received MsgVoteResp from 226d7ac4e2309206 at term 2
	2023-10-24 20:17:38.047762 I | raft: 226d7ac4e2309206 became leader at term 2
	2023-10-24 20:17:38.047779 I | raft: raft.node: 226d7ac4e2309206 elected leader 226d7ac4e2309206 at term 2
	2023-10-24 20:17:38.048110 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-24 20:17:38.049466 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-24 20:17:38.049590 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-24 20:17:38.049618 I | etcdserver: published {Name:old-k8s-version-467375 ClientURLs:[https://192.168.39.71:2379]} to cluster 98fbf1e9ed6d9a6e
	2023-10-24 20:17:38.049850 I | embed: ready to serve client requests
	2023-10-24 20:17:38.049935 I | embed: ready to serve client requests
	2023-10-24 20:17:38.051342 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-24 20:17:38.053123 I | embed: serving client requests on 192.168.39.71:2379
	2023-10-24 20:18:02.422690 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (100.444844ms) to execute
	2023-10-24 20:27:38.072143 I | mvcc: store.index: compact 660
	2023-10-24 20:27:38.074442 I | mvcc: finished scheduled compaction at 660 (took 1.821789ms)
	
	* 
	* ==> kernel <==
	*  20:28:23 up 16 min,  0 users,  load average: 0.03, 0.18, 0.20
	Linux old-k8s-version-467375 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6116ab191670e7b565264bfc41b1631776726bd1036b20cf34cc6700b709d7e8] <==
	* I1024 20:21:04.353398       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1024 20:21:04.353615       1 handler_proxy.go:99] no RequestInfo found in the context
	E1024 20:21:04.353712       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:21:04.353737       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:22:42.465345       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1024 20:22:42.465453       1 handler_proxy.go:99] no RequestInfo found in the context
	E1024 20:22:42.465568       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:22:42.465577       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:23:42.465817       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1024 20:23:42.466064       1 handler_proxy.go:99] no RequestInfo found in the context
	E1024 20:23:42.466124       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:23:42.466150       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:25:42.466780       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1024 20:25:42.466938       1 handler_proxy.go:99] no RequestInfo found in the context
	E1024 20:25:42.467017       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:25:42.467025       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:27:42.468014       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1024 20:27:42.468149       1 handler_proxy.go:99] no RequestInfo found in the context
	E1024 20:27:42.468252       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:27:42.468262       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3bf056c13d767f95188318d77e512053638a457924525f0625d09740e6ead087] <==
	* E1024 20:22:03.410236       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:22:17.412041       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:22:33.662343       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:22:49.413926       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:23:03.914814       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:23:21.416398       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:23:34.166927       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:23:53.418939       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:24:04.418893       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:24:25.420883       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:24:34.671297       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:24:57.422881       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:25:04.923320       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:25:29.424898       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:25:35.175390       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:26:01.427390       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:26:05.428339       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:26:33.429927       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:26:35.680484       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:27:05.432699       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:27:05.938681       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1024 20:27:36.190838       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:27:37.434845       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:28:06.443052       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:28:09.437027       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [2d460706afb1a63d29b784f6dccefc3f8436a7e3e30f77c0504564c591528a87] <==
	* W1024 20:18:03.692313       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1024 20:18:03.714817       1 node.go:135] Successfully retrieved node IP: 192.168.39.71
	I1024 20:18:03.714904       1 server_others.go:149] Using iptables Proxier.
	I1024 20:18:03.716451       1 server.go:529] Version: v1.16.0
	I1024 20:18:03.720208       1 config.go:313] Starting service config controller
	I1024 20:18:03.720270       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1024 20:18:03.720303       1 config.go:131] Starting endpoints config controller
	I1024 20:18:03.720409       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1024 20:18:03.820721       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1024 20:18:03.821019       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [53d38854e1b720b631aff201fbe7600cacd87a505d1e2a94ec09a3fec249c582] <==
	* W1024 20:17:41.458577       1 authentication.go:79] Authentication is disabled
	I1024 20:17:41.458656       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1024 20:17:41.459207       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1024 20:17:41.503456       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 20:17:41.503698       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 20:17:41.503797       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 20:17:41.503881       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 20:17:41.503957       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 20:17:41.516470       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 20:17:41.544943       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 20:17:41.545130       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 20:17:41.558105       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 20:17:41.558407       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 20:17:41.558730       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 20:17:42.506433       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 20:17:42.511970       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 20:17:42.538247       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 20:17:42.553234       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 20:17:42.553343       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 20:17:42.553925       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 20:17:42.554155       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 20:17:42.559312       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 20:17:42.559596       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 20:17:42.560260       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 20:17:42.562913       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 20:11:58 UTC, ends at Tue 2023-10-24 20:28:23 UTC. --
	Oct 24 20:23:46 old-k8s-version-467375 kubelet[3174]: E1024 20:23:46.824669    3174 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 24 20:23:46 old-k8s-version-467375 kubelet[3174]: E1024 20:23:46.824776    3174 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 24 20:23:46 old-k8s-version-467375 kubelet[3174]: E1024 20:23:46.824842    3174 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 24 20:23:46 old-k8s-version-467375 kubelet[3174]: E1024 20:23:46.824878    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Oct 24 20:24:00 old-k8s-version-467375 kubelet[3174]: E1024 20:24:00.808696    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:24:13 old-k8s-version-467375 kubelet[3174]: E1024 20:24:13.806434    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:24:27 old-k8s-version-467375 kubelet[3174]: E1024 20:24:27.805885    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:24:38 old-k8s-version-467375 kubelet[3174]: E1024 20:24:38.806463    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:24:51 old-k8s-version-467375 kubelet[3174]: E1024 20:24:51.805697    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:25:02 old-k8s-version-467375 kubelet[3174]: E1024 20:25:02.806236    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:25:16 old-k8s-version-467375 kubelet[3174]: E1024 20:25:16.806593    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:25:31 old-k8s-version-467375 kubelet[3174]: E1024 20:25:31.806137    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:25:45 old-k8s-version-467375 kubelet[3174]: E1024 20:25:45.805591    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:25:59 old-k8s-version-467375 kubelet[3174]: E1024 20:25:59.806159    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:26:13 old-k8s-version-467375 kubelet[3174]: E1024 20:26:13.806017    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:26:26 old-k8s-version-467375 kubelet[3174]: E1024 20:26:26.806249    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:26:40 old-k8s-version-467375 kubelet[3174]: E1024 20:26:40.806357    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:26:55 old-k8s-version-467375 kubelet[3174]: E1024 20:26:55.806159    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:27:09 old-k8s-version-467375 kubelet[3174]: E1024 20:27:09.806706    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:27:23 old-k8s-version-467375 kubelet[3174]: E1024 20:27:23.806193    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:27:34 old-k8s-version-467375 kubelet[3174]: E1024 20:27:34.806634    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:27:34 old-k8s-version-467375 kubelet[3174]: E1024 20:27:34.907272    3174 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Oct 24 20:27:48 old-k8s-version-467375 kubelet[3174]: E1024 20:27:48.806401    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:28:01 old-k8s-version-467375 kubelet[3174]: E1024 20:28:01.806469    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:28:13 old-k8s-version-467375 kubelet[3174]: E1024 20:28:13.806155    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [15cfea4cc862a2fa28d852aa206aa8cff0b5f94827f6ef972bf1caea394e169f] <==
	* I1024 20:18:03.712749       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 20:18:03.734431       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 20:18:03.734586       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 20:18:03.742147       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 20:18:03.744014       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24fd7826-13bb-4292-aeda-a867c165a3ad", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-467375_0c370f91-25ec-4144-b971-8091d45e365c became leader
	I1024 20:18:03.744082       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-467375_0c370f91-25ec-4144-b971-8091d45e365c!
	I1024 20:18:03.844586       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-467375_0c370f91-25ec-4144-b971-8091d45e365c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467375 -n old-k8s-version-467375
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-467375 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-b5qcv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-467375 describe pod metrics-server-74d5856cc6-b5qcv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-467375 describe pod metrics-server-74d5856cc6-b5qcv: exit status 1 (72.075948ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-b5qcv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-467375 describe pod metrics-server-74d5856cc6-b5qcv: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (363.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-867165 -n embed-certs-867165
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-24 20:31:21.918630938 +0000 UTC m=+5443.684109938
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-867165 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-867165 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.53µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-867165 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
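A rough manual equivalent of the image check above (a sketch only; it assumes the same context name and that the dashboard addon created the dashboard-metrics-scraper deployment, which is what the test expects) is to read the deployment's container image directly:

	kubectl --context embed-certs-867165 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to contain registry.k8s.io/echoserver:1.4, the override passed via
	# "addons enable dashboard --images=MetricsScraper=registry.k8s.io/echoserver:1.4" (see the Audit table below)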
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-867165 -n embed-certs-867165
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-867165 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-867165 logs -n 25: (1.290761749s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:02 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-087071 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | disable-driver-mounts-087071                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:05 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-014826             | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-867165            | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC | 24 Oct 23 20:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-643126  | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC | 24 Oct 23 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC |                     |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-014826                  | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-867165                 | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467375        | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-643126       | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:08 UTC | 24 Oct 23 20:16 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467375             | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC | 24 Oct 23 20:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:30 UTC | 24 Oct 23 20:30 UTC |
	| start   | -p newest-cni-398707 --memory=2200 --alsologtostderr   | newest-cni-398707            | jenkins | v1.31.2 | 24 Oct 23 20:30 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:31 UTC | 24 Oct 23 20:31 UTC |
	| start   | -p auto-784554 --memory=3072                           | auto-784554                  | jenkins | v1.31.2 | 24 Oct 23 20:31 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 20:31:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 20:31:10.945358   55395 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:31:10.945491   55395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:31:10.945504   55395 out.go:309] Setting ErrFile to fd 2...
	I1024 20:31:10.945511   55395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:31:10.945685   55395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:31:10.946269   55395 out.go:303] Setting JSON to false
	I1024 20:31:10.947270   55395 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7769,"bootTime":1698171702,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 20:31:10.947335   55395 start.go:138] virtualization: kvm guest
	I1024 20:31:10.949520   55395 out.go:177] * [auto-784554] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 20:31:10.950947   55395 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:31:10.951005   55395 notify.go:220] Checking for updates...
	I1024 20:31:10.952582   55395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:31:10.954130   55395 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:31:10.955526   55395 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:31:10.956771   55395 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 20:31:10.958109   55395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:31:10.960058   55395 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:31:10.960201   55395 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:31:10.960362   55395 config.go:182] Loaded profile config "newest-cni-398707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:31:10.960457   55395 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:31:10.997800   55395 out.go:177] * Using the kvm2 driver based on user configuration
	I1024 20:31:10.999066   55395 start.go:298] selected driver: kvm2
	I1024 20:31:10.999086   55395 start.go:902] validating driver "kvm2" against <nil>
	I1024 20:31:10.999100   55395 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:31:10.999811   55395 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:31:10.999877   55395 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 20:31:11.015284   55395 install.go:137] /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1024 20:31:11.015327   55395 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 20:31:11.015559   55395 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 20:31:11.015598   55395 cni.go:84] Creating CNI manager for ""
	I1024 20:31:11.015610   55395 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:31:11.015622   55395 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1024 20:31:11.015630   55395 start_flags.go:323] config:
	{Name:auto-784554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:auto-784554 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:31:11.015821   55395 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:31:11.017792   55395 out.go:177] * Starting control plane node auto-784554 in cluster auto-784554
	I1024 20:31:11.191725   55046 main.go:141] libmachine: (newest-cni-398707) DBG | domain newest-cni-398707 has defined MAC address 52:54:00:80:c3:c6 in network mk-newest-cni-398707
	I1024 20:31:11.192186   55046 main.go:141] libmachine: (newest-cni-398707) DBG | unable to find current IP address of domain newest-cni-398707 in network mk-newest-cni-398707
	I1024 20:31:11.192210   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:11.192162   55069 retry.go:31] will retry after 2.856587769s: waiting for machine to come up
	I1024 20:31:14.052110   55046 main.go:141] libmachine: (newest-cni-398707) DBG | domain newest-cni-398707 has defined MAC address 52:54:00:80:c3:c6 in network mk-newest-cni-398707
	I1024 20:31:14.052575   55046 main.go:141] libmachine: (newest-cni-398707) DBG | unable to find current IP address of domain newest-cni-398707 in network mk-newest-cni-398707
	I1024 20:31:14.052605   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:14.052526   55069 retry.go:31] will retry after 3.151969008s: waiting for machine to come up
	I1024 20:31:11.019050   55395 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:31:11.019091   55395 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1024 20:31:11.019101   55395 cache.go:57] Caching tarball of preloaded images
	I1024 20:31:11.019181   55395 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 20:31:11.019195   55395 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 20:31:11.019311   55395 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/auto-784554/config.json ...
	I1024 20:31:11.019333   55395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/auto-784554/config.json: {Name:mk3b22f14eef1d65d0525831b9895f3890a8f2c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:31:11.019485   55395 start.go:365] acquiring machines lock for auto-784554: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:31:17.205813   55046 main.go:141] libmachine: (newest-cni-398707) DBG | domain newest-cni-398707 has defined MAC address 52:54:00:80:c3:c6 in network mk-newest-cni-398707
	I1024 20:31:17.206305   55046 main.go:141] libmachine: (newest-cni-398707) DBG | unable to find current IP address of domain newest-cni-398707 in network mk-newest-cni-398707
	I1024 20:31:17.206326   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:17.206267   55069 retry.go:31] will retry after 4.630540249s: waiting for machine to come up
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 20:11:17 UTC, ends at Tue 2023-10-24 20:31:22 UTC. --
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.656414639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=00397c6e-3522-45c3-9da4-e98977a3cc51 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.657898883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a540c84c-ebcc-46f9-9eb0-8ba3396b6acb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.658361655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179482658347488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a540c84c-ebcc-46f9-9eb0-8ba3396b6acb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.659027646Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=09f39f80-6692-4462-b950-0de19a1a0727 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.659112737Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=09f39f80-6692-4462-b950-0de19a1a0727 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.659314108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178344527784418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-1865-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7033aab4c2133afc2f0545d40a04f014e210655391c56beb79b856380138a7,PodSandboxId:25869d82b77f0d0362587016670201cfb1fbda91a02992947e0bc7a61b66be1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178321362887232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38a424c5-7864-4116-b76f-3cf8ea7f8ce5,},Annotations:map[string]string{io.kubernetes.container.hash: 6e578840,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0,PodSandboxId:f54e65b725cb62f9455c7f0f1d24d8df3bdadb8a2555b7649db6074cc1a4e5ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178319590590966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6qq4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40,},Annotations:map[string]string{io.kubernetes.container.hash: 52a084ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3,PodSandboxId:90f778b2d55f6c8e9f9d61b222d30e2d38bb5af07a9bf7c719acbfda07b99171,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178314716403834,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thkqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55c1a6e9-
7a56-499f-a51c-41e4cbb1490d,},Annotations:map[string]string{io.kubernetes.container.hash: 54fc3b61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178312505442582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-18
65-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31,PodSandboxId:0e811808018d5196331b539838cbd673988b8aeda8933f9ff3c7024b78ec2516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178305991343819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ef7dee608c8f837
f86f8a82041c976,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2,PodSandboxId:330793c8976de0efa5fa88c059d2ccea78dcabb3b8d964e30da6e84158a88e33,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178305806433116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e87f9e66dfb9145ef494be8265dd5a6,},Annotations:map[string]string{io
.kubernetes.container.hash: c79c50a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc,PodSandboxId:b04361eae724627037166460d4491f4b0f59f0ab593e920843ce0c27b664d0fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178305300030394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a042a0bf4e39619ba37edb771d9c61c,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251,PodSandboxId:744cbeaf8172d0f1c3131377996c23645eeb8927d0ccaaafb8382311200402f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178305322862399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d620305d0efc571fe3c72b60af81484e,},Annotations:map[
string]string{io.kubernetes.container.hash: c8acb279,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=09f39f80-6692-4462-b950-0de19a1a0727 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.700580351Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=b9d7e988-8cf8-4577-854c-590d8e11ff89 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.700829856Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:25869d82b77f0d0362587016670201cfb1fbda91a02992947e0bc7a61b66be1d,Metadata:&PodSandboxMetadata{Name:busybox,Uid:38a424c5-7864-4116-b76f-3cf8ea7f8ce5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178319190981565,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38a424c5-7864-4116-b76f-3cf8ea7f8ce5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T20:11:51.230787285Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f54e65b725cb62f9455c7f0f1d24d8df3bdadb8a2555b7649db6074cc1a4e5ed,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-6qq4r,Uid:e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178318896343
476,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-6qq4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T20:11:51.230794277Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a2550e91df16946c52de1a69cedeb9d1d2d2397f593cf40de32bbbf6d2b0bd2,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-pv9ww,Uid:6a642ef8-3b64-4cf1-b905-a3c7f510f29f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178315291890744,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-pv9ww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a642ef8-3b64-4cf1-b905-a3c7f510f29f,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T20:11:51.
230806178Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e1351874-1865-4d9e-bb77-acd1eaf0023e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178311589371513,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-1865-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-24T20:11:51.230807554Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:90f778b2d55f6c8e9f9d61b222d30e2d38bb5af07a9bf7c719acbfda07b99171,Metadata:&PodSandboxMetadata{Name:kube-proxy-thkqr,Uid:55c1a6e9-7a56-499f-a51c-41e4cbb1490d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178311585426514,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-thkqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55c1a6e9-7a56-499f-a51c-41e4cbb1490d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2023-10-24T20:11:51.230802066Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:744cbeaf8172d0f1c3131377996c23645eeb8927d0ccaaafb8382311200402f6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-867165,Uid:d620305d0efc571fe3c72b60af81484e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178304769861242,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d620305d0efc571fe3c72b60af81484e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.10:8443,kubernetes.io/config.hash: d620305d0efc571fe3c72b60af81484e,kubernetes.io/config.seen: 2023-10-24T20:11:44.228638431Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:330793c8976de0efa5fa88c059d2ccea78dcabb3b8d964e30da6e84158a88e33,Metadata:&PodSandboxMetadata{
Name:etcd-embed-certs-867165,Uid:8e87f9e66dfb9145ef494be8265dd5a6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178304745059051,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e87f9e66dfb9145ef494be8265dd5a6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.10:2379,kubernetes.io/config.hash: 8e87f9e66dfb9145ef494be8265dd5a6,kubernetes.io/config.seen: 2023-10-24T20:11:44.228637397Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b04361eae724627037166460d4491f4b0f59f0ab593e920843ce0c27b664d0fd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-867165,Uid:8a042a0bf4e39619ba37edb771d9c61c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178304717369733,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a042a0bf4e39619ba37edb771d9c61c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8a042a0bf4e39619ba37edb771d9c61c,kubernetes.io/config.seen: 2023-10-24T20:11:44.228633124Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0e811808018d5196331b539838cbd673988b8aeda8933f9ff3c7024b78ec2516,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-867165,Uid:f4ef7dee608c8f837f86f8a82041c976,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178304713705011,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ef7dee608c8f837f86f8a82041c976,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f4ef7dee608c8f837f86f8a82041c
976,kubernetes.io/config.seen: 2023-10-24T20:11:44.228636467Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=b9d7e988-8cf8-4577-854c-590d8e11ff89 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.701193189Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=998a2bb1-fdf9-47ec-88f4-aa8dc71c2378 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.701299703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=998a2bb1-fdf9-47ec-88f4-aa8dc71c2378 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.701704548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=56248f9d-50a0-41a0-a1e6-6691fedd38c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.701772644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=56248f9d-50a0-41a0-a1e6-6691fedd38c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.701952021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178344527784418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-1865-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7033aab4c2133afc2f0545d40a04f014e210655391c56beb79b856380138a7,PodSandboxId:25869d82b77f0d0362587016670201cfb1fbda91a02992947e0bc7a61b66be1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178321362887232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38a424c5-7864-4116-b76f-3cf8ea7f8ce5,},Annotations:map[string]string{io.kubernetes.container.hash: 6e578840,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0,PodSandboxId:f54e65b725cb62f9455c7f0f1d24d8df3bdadb8a2555b7649db6074cc1a4e5ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178319590590966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6qq4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40,},Annotations:map[string]string{io.kubernetes.container.hash: 52a084ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3,PodSandboxId:90f778b2d55f6c8e9f9d61b222d30e2d38bb5af07a9bf7c719acbfda07b99171,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178314716403834,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thkqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55c1a6e9-
7a56-499f-a51c-41e4cbb1490d,},Annotations:map[string]string{io.kubernetes.container.hash: 54fc3b61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178312505442582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-18
65-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31,PodSandboxId:0e811808018d5196331b539838cbd673988b8aeda8933f9ff3c7024b78ec2516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178305991343819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ef7dee608c8f837
f86f8a82041c976,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2,PodSandboxId:330793c8976de0efa5fa88c059d2ccea78dcabb3b8d964e30da6e84158a88e33,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178305806433116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e87f9e66dfb9145ef494be8265dd5a6,},Annotations:map[string]string{io
.kubernetes.container.hash: c79c50a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc,PodSandboxId:b04361eae724627037166460d4491f4b0f59f0ab593e920843ce0c27b664d0fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178305300030394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a042a0bf4e39619ba37edb771d9c61c,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251,PodSandboxId:744cbeaf8172d0f1c3131377996c23645eeb8927d0ccaaafb8382311200402f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178305322862399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d620305d0efc571fe3c72b60af81484e,},Annotations:map[
string]string{io.kubernetes.container.hash: c8acb279,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=56248f9d-50a0-41a0-a1e6-6691fedd38c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.704049282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=09e58a18-8670-4cf1-8bfb-6914a0be1b26 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.704404674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179482704393527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=09e58a18-8670-4cf1-8bfb-6914a0be1b26 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.705064865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5c910e00-f0f6-45fc-8b70-8c882b9434d3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.705151845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5c910e00-f0f6-45fc-8b70-8c882b9434d3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.705322108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178344527784418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-1865-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7033aab4c2133afc2f0545d40a04f014e210655391c56beb79b856380138a7,PodSandboxId:25869d82b77f0d0362587016670201cfb1fbda91a02992947e0bc7a61b66be1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178321362887232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38a424c5-7864-4116-b76f-3cf8ea7f8ce5,},Annotations:map[string]string{io.kubernetes.container.hash: 6e578840,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0,PodSandboxId:f54e65b725cb62f9455c7f0f1d24d8df3bdadb8a2555b7649db6074cc1a4e5ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178319590590966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6qq4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40,},Annotations:map[string]string{io.kubernetes.container.hash: 52a084ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3,PodSandboxId:90f778b2d55f6c8e9f9d61b222d30e2d38bb5af07a9bf7c719acbfda07b99171,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178314716403834,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thkqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55c1a6e9-
7a56-499f-a51c-41e4cbb1490d,},Annotations:map[string]string{io.kubernetes.container.hash: 54fc3b61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178312505442582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-18
65-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31,PodSandboxId:0e811808018d5196331b539838cbd673988b8aeda8933f9ff3c7024b78ec2516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178305991343819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ef7dee608c8f837
f86f8a82041c976,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2,PodSandboxId:330793c8976de0efa5fa88c059d2ccea78dcabb3b8d964e30da6e84158a88e33,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178305806433116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e87f9e66dfb9145ef494be8265dd5a6,},Annotations:map[string]string{io
.kubernetes.container.hash: c79c50a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc,PodSandboxId:b04361eae724627037166460d4491f4b0f59f0ab593e920843ce0c27b664d0fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178305300030394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a042a0bf4e39619ba37edb771d9c61c,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251,PodSandboxId:744cbeaf8172d0f1c3131377996c23645eeb8927d0ccaaafb8382311200402f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178305322862399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d620305d0efc571fe3c72b60af81484e,},Annotations:map[
string]string{io.kubernetes.container.hash: c8acb279,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5c910e00-f0f6-45fc-8b70-8c882b9434d3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.741626638Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6629bac0-3d41-4c2c-912f-de02a867d7f5 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.741692031Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6629bac0-3d41-4c2c-912f-de02a867d7f5 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.743187421Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8fde6e9b-6f94-4ea6-8be3-7ac2dbab01e7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.743789360Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179482743773129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8fde6e9b-6f94-4ea6-8be3-7ac2dbab01e7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.744882264Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fd2f09f3-d3f2-4e49-9214-08330a74deb5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.744983072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fd2f09f3-d3f2-4e49-9214-08330a74deb5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:22 embed-certs-867165 crio[711]: time="2023-10-24 20:31:22.745196207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178344527784418,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-1865-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 3,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7033aab4c2133afc2f0545d40a04f014e210655391c56beb79b856380138a7,PodSandboxId:25869d82b77f0d0362587016670201cfb1fbda91a02992947e0bc7a61b66be1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178321362887232,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38a424c5-7864-4116-b76f-3cf8ea7f8ce5,},Annotations:map[string]string{io.kubernetes.container.hash: 6e578840,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0,PodSandboxId:f54e65b725cb62f9455c7f0f1d24d8df3bdadb8a2555b7649db6074cc1a4e5ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178319590590966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6qq4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40,},Annotations:map[string]string{io.kubernetes.container.hash: 52a084ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3,PodSandboxId:90f778b2d55f6c8e9f9d61b222d30e2d38bb5af07a9bf7c719acbfda07b99171,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178314716403834,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thkqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55c1a6e9-
7a56-499f-a51c-41e4cbb1490d,},Annotations:map[string]string{io.kubernetes.container.hash: 54fc3b61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382,PodSandboxId:2db5306e556fe4b454b044c40c382518fd9e15c86f852c7eedf2d0ff1748eaa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178312505442582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1351874-18
65-4d9e-bb77-acd1eaf0023e,},Annotations:map[string]string{io.kubernetes.container.hash: 87804a24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31,PodSandboxId:0e811808018d5196331b539838cbd673988b8aeda8933f9ff3c7024b78ec2516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178305991343819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ef7dee608c8f837
f86f8a82041c976,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2,PodSandboxId:330793c8976de0efa5fa88c059d2ccea78dcabb3b8d964e30da6e84158a88e33,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178305806433116,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e87f9e66dfb9145ef494be8265dd5a6,},Annotations:map[string]string{io
.kubernetes.container.hash: c79c50a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc,PodSandboxId:b04361eae724627037166460d4491f4b0f59f0ab593e920843ce0c27b664d0fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178305300030394,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a042a0bf4e39619ba37edb771d9c61c,},Annota
tions:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251,PodSandboxId:744cbeaf8172d0f1c3131377996c23645eeb8927d0ccaaafb8382311200402f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178305322862399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-867165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d620305d0efc571fe3c72b60af81484e,},Annotations:map[
string]string{io.kubernetes.container.hash: c8acb279,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fd2f09f3-d3f2-4e49-9214-08330a74deb5 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	26f391c93fe16       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   2db5306e556fe       storage-provisioner
	9c7033aab4c21       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   25869d82b77f0       busybox
	9e2b63eae7db7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      19 minutes ago      Running             coredns                   1                   f54e65b725cb6       coredns-5dd5756b68-6qq4r
	a9906107f32c1       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      19 minutes ago      Running             kube-proxy                1                   90f778b2d55f6       kube-proxy-thkqr
	2b61033b8afd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   2db5306e556fe       storage-provisioner
	d23e68e4d4a23       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      19 minutes ago      Running             kube-scheduler            1                   0e811808018d5       kube-scheduler-embed-certs-867165
	82b51425efb50       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      19 minutes ago      Running             etcd                      1                   330793c8976de       etcd-embed-certs-867165
	7217044d2e039       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      19 minutes ago      Running             kube-apiserver            1                   744cbeaf8172d       kube-apiserver-embed-certs-867165
	e159067fdfc42       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      19 minutes ago      Running             kube-controller-manager   1                   b04361eae7246       kube-controller-manager-embed-certs-867165
	
	* 
	* ==> coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57575 - 56234 "HINFO IN 4712219434935555436.3172398071474408327. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013447647s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-867165
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-867165
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=embed-certs-867165
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T20_02_59_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 20:02:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-867165
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 20:31:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 20:27:43 +0000   Tue, 24 Oct 2023 20:02:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 20:27:43 +0000   Tue, 24 Oct 2023 20:02:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 20:27:43 +0000   Tue, 24 Oct 2023 20:02:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 20:27:43 +0000   Tue, 24 Oct 2023 20:12:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.10
	  Hostname:    embed-certs-867165
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 602ce82d6b5a46b4bc42fbc229933dff
	  System UUID:                602ce82d-6b5a-46b4-bc42-fbc229933dff
	  Boot ID:                    d24d6ea4-501f-4b37-a172-fe947a75312c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5dd5756b68-6qq4r                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-867165                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-867165             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-867165    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-thkqr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-867165             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-pv9ww               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node embed-certs-867165 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node embed-certs-867165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node embed-certs-867165 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-867165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-867165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-867165 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node embed-certs-867165 status is now: NodeReady
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-867165 event: Registered Node embed-certs-867165 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-867165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-867165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-867165 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-867165 event: Registered Node embed-certs-867165 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct24 20:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.327496] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.565243] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151969] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.440297] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.446921] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.097748] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.138995] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.122858] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.255571] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.052062] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[Oct24 20:12] kauditd_printk_skb: 29 callbacks suppressed
	
	* 
	* ==> etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] <==
	* {"level":"info","ts":"2023-10-24T20:11:54.434815Z","caller":"traceutil/trace.go:171","msg":"trace[607005491] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:562; }","duration":"984.993967ms","start":"2023-10-24T20:11:53.449814Z","end":"2023-10-24T20:11:54.434808Z","steps":["trace[607005491] 'agreement among raft nodes before linearized reading'  (duration: 984.914579ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T20:11:54.434836Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-24T20:11:53.449804Z","time spent":"985.026855ms","remote":"127.0.0.1:56264","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":230,"request content":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" "}
	{"level":"info","ts":"2023-10-24T20:11:54.711883Z","caller":"traceutil/trace.go:171","msg":"trace[331690051] transaction","detail":"{read_only:false; response_revision:563; number_of_response:1; }","duration":"270.662001ms","start":"2023-10-24T20:11:54.441201Z","end":"2023-10-24T20:11:54.711863Z","steps":["trace[331690051] 'process raft request'  (duration: 270.457886ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:11:54.712925Z","caller":"traceutil/trace.go:171","msg":"trace[496078760] linearizableReadLoop","detail":"{readStateIndex:597; appliedIndex:597; }","duration":"264.238262ms","start":"2023-10-24T20:11:54.448674Z","end":"2023-10-24T20:11:54.712912Z","steps":["trace[496078760] 'read index received'  (duration: 264.233681ms)","trace[496078760] 'applied index is now lower than readState.Index'  (duration: 3.328µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T20:11:54.71316Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.770491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4133"}
	{"level":"info","ts":"2023-10-24T20:11:54.713221Z","caller":"traceutil/trace.go:171","msg":"trace[1026459042] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:563; }","duration":"264.843318ms","start":"2023-10-24T20:11:54.448368Z","end":"2023-10-24T20:11:54.713211Z","steps":["trace[1026459042] 'agreement among raft nodes before linearized reading'  (duration: 264.734782ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:11:54.922297Z","caller":"traceutil/trace.go:171","msg":"trace[835528844] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"186.27579ms","start":"2023-10-24T20:11:54.736007Z","end":"2023-10-24T20:11:54.922283Z","steps":["trace[835528844] 'process raft request'  (duration: 186.241909ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:11:54.922696Z","caller":"traceutil/trace.go:171","msg":"trace[1282737226] transaction","detail":"{read_only:false; response_revision:564; number_of_response:1; }","duration":"469.022718ms","start":"2023-10-24T20:11:54.453662Z","end":"2023-10-24T20:11:54.922684Z","steps":["trace[1282737226] 'process raft request'  (duration: 448.194095ms)","trace[1282737226] 'compare'  (duration: 20.321745ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T20:11:54.922875Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-24T20:11:54.453648Z","time spent":"469.074602ms","remote":"127.0.0.1:56260","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3544,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:513 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3490 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2023-10-24T20:11:54.922956Z","caller":"traceutil/trace.go:171","msg":"trace[1017741281] linearizableReadLoop","detail":"{readStateIndex:598; appliedIndex:597; }","duration":"209.866139ms","start":"2023-10-24T20:11:54.713084Z","end":"2023-10-24T20:11:54.922951Z","steps":["trace[1017741281] 'read index received'  (duration: 188.776089ms)","trace[1017741281] 'applied index is now lower than readState.Index'  (duration: 21.089226ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T20:11:54.923122Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"470.898462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2023-10-24T20:11:54.923185Z","caller":"traceutil/trace.go:171","msg":"trace[1921888367] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:565; }","duration":"470.962695ms","start":"2023-10-24T20:11:54.452215Z","end":"2023-10-24T20:11:54.923178Z","steps":["trace[1921888367] 'agreement among raft nodes before linearized reading'  (duration: 470.878777ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T20:11:54.923212Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-24T20:11:54.452206Z","time spent":"470.996961ms","remote":"127.0.0.1:56264","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":232,"request content":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" "}
	{"level":"warn","ts":"2023-10-24T20:11:54.923313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.444513ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/busybox.1791242d3e84c3ab\" ","response":"range_response_count:1 size:880"}
	{"level":"info","ts":"2023-10-24T20:11:54.923327Z","caller":"traceutil/trace.go:171","msg":"trace[778577011] range","detail":"{range_begin:/registry/events/default/busybox.1791242d3e84c3ab; range_end:; response_count:1; response_revision:565; }","duration":"197.459599ms","start":"2023-10-24T20:11:54.725864Z","end":"2023-10-24T20:11:54.923323Z","steps":["trace[778577011] 'agreement among raft nodes before linearized reading'  (duration: 197.428045ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:11:57.293362Z","caller":"traceutil/trace.go:171","msg":"trace[2030366714] linearizableReadLoop","detail":"{readStateIndex:626; appliedIndex:625; }","duration":"125.583884ms","start":"2023-10-24T20:11:57.167763Z","end":"2023-10-24T20:11:57.293347Z","steps":["trace[2030366714] 'read index received'  (duration: 125.234487ms)","trace[2030366714] 'applied index is now lower than readState.Index'  (duration: 348.777µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T20:11:57.293721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.955308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-867165\" ","response":"range_response_count:1 size:5677"}
	{"level":"info","ts":"2023-10-24T20:11:57.294066Z","caller":"traceutil/trace.go:171","msg":"trace[1451677961] range","detail":"{range_begin:/registry/minions/embed-certs-867165; range_end:; response_count:1; response_revision:585; }","duration":"126.312347ms","start":"2023-10-24T20:11:57.167743Z","end":"2023-10-24T20:11:57.294055Z","steps":["trace[1451677961] 'agreement among raft nodes before linearized reading'  (duration: 125.735556ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:11:57.295078Z","caller":"traceutil/trace.go:171","msg":"trace[1284424414] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"128.211691ms","start":"2023-10-24T20:11:57.166857Z","end":"2023-10-24T20:11:57.295069Z","steps":["trace[1284424414] 'process raft request'  (duration: 126.256415ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:21:49.193866Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":855}
	{"level":"info","ts":"2023-10-24T20:21:49.203013Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":855,"took":"8.331968ms","hash":26409532}
	{"level":"info","ts":"2023-10-24T20:21:49.203084Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":26409532,"revision":855,"compact-revision":-1}
	{"level":"info","ts":"2023-10-24T20:26:49.201944Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1098}
	{"level":"info","ts":"2023-10-24T20:26:49.20477Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1098,"took":"1.86147ms","hash":4243741566}
	{"level":"info","ts":"2023-10-24T20:26:49.204867Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4243741566,"revision":1098,"compact-revision":855}
	
	* 
	* ==> kernel <==
	*  20:31:23 up 20 min,  0 users,  load average: 0.06, 0.20, 0.17
	Linux embed-certs-867165 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] <==
	* W1024 20:26:51.933030       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:26:51.933239       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:26:51.933279       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:26:51.933034       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:26:51.933423       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:26:51.935393       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:27:50.811907       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:27:51.934010       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:27:51.934149       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:27:51.934177       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:27:51.936389       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:27:51.936444       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:27:51.936483       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:28:50.811695       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 20:29:50.811441       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:29:51.935250       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:29:51.935370       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:29:51.935382       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:29:51.937594       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:29:51.937696       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:29:51.937726       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:30:50.812007       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] <==
	* I1024 20:25:36.242142       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:26:05.739114       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:26:06.250698       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:26:35.745856       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:26:36.259352       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:27:05.759235       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:27:06.272459       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:27:35.764968       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:27:36.282231       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:28:05.771947       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:28:06.290962       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1024 20:28:15.279979       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="320.445µs"
	I1024 20:28:27.278682       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="264.618µs"
	E1024 20:28:35.777999       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:28:36.300435       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:29:05.784944       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:29:06.308626       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:29:35.791070       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:29:36.317013       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:30:05.796814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:30:06.325904       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:30:35.802649       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:30:36.334775       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:31:05.809201       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:31:06.345257       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] <==
	* I1024 20:11:55.096724       1 server_others.go:69] "Using iptables proxy"
	I1024 20:11:55.107840       1 node.go:141] Successfully retrieved node IP: 192.168.72.10
	I1024 20:11:55.161629       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 20:11:55.161684       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 20:11:55.164723       1 server_others.go:152] "Using iptables Proxier"
	I1024 20:11:55.164801       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 20:11:55.164988       1 server.go:846] "Version info" version="v1.28.3"
	I1024 20:11:55.165041       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:11:55.166111       1 config.go:188] "Starting service config controller"
	I1024 20:11:55.166195       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 20:11:55.166225       1 config.go:97] "Starting endpoint slice config controller"
	I1024 20:11:55.166231       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 20:11:55.170407       1 config.go:315] "Starting node config controller"
	I1024 20:11:55.170450       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 20:11:55.266339       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 20:11:55.266486       1 shared_informer.go:318] Caches are synced for service config
	I1024 20:11:55.274774       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] <==
	* I1024 20:11:48.465150       1 serving.go:348] Generated self-signed cert in-memory
	W1024 20:11:50.864932       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 20:11:50.865102       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 20:11:50.865133       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 20:11:50.865157       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 20:11:50.903703       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 20:11:50.903794       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:11:50.907773       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 20:11:50.907926       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 20:11:50.912941       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 20:11:50.913024       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 20:11:51.009657       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 20:11:17 UTC, ends at Tue 2023-10-24 20:31:23 UTC. --
	Oct 24 20:28:42 embed-certs-867165 kubelet[917]: E1024 20:28:42.268274     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:28:44 embed-certs-867165 kubelet[917]: E1024 20:28:44.279738     917 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:28:44 embed-certs-867165 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:28:44 embed-certs-867165 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:28:44 embed-certs-867165 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:28:57 embed-certs-867165 kubelet[917]: E1024 20:28:57.261725     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:29:09 embed-certs-867165 kubelet[917]: E1024 20:29:09.263033     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:29:24 embed-certs-867165 kubelet[917]: E1024 20:29:24.263619     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:29:35 embed-certs-867165 kubelet[917]: E1024 20:29:35.262042     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:29:44 embed-certs-867165 kubelet[917]: E1024 20:29:44.278789     917 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:29:44 embed-certs-867165 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:29:44 embed-certs-867165 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:29:44 embed-certs-867165 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:29:47 embed-certs-867165 kubelet[917]: E1024 20:29:47.262363     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:30:01 embed-certs-867165 kubelet[917]: E1024 20:30:01.262635     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:30:12 embed-certs-867165 kubelet[917]: E1024 20:30:12.261959     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:30:23 embed-certs-867165 kubelet[917]: E1024 20:30:23.262445     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:30:37 embed-certs-867165 kubelet[917]: E1024 20:30:37.263192     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:30:44 embed-certs-867165 kubelet[917]: E1024 20:30:44.278467     917 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:30:44 embed-certs-867165 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:30:44 embed-certs-867165 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:30:44 embed-certs-867165 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:30:51 embed-certs-867165 kubelet[917]: E1024 20:30:51.262431     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:31:04 embed-certs-867165 kubelet[917]: E1024 20:31:04.265128     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	Oct 24 20:31:15 embed-certs-867165 kubelet[917]: E1024 20:31:15.262150     917 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pv9ww" podUID="6a642ef8-3b64-4cf1-b905-a3c7f510f29f"
	
	* 
	* ==> storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] <==
	* I1024 20:12:24.660342       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 20:12:24.679415       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 20:12:24.680556       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 20:12:42.102644       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 20:12:42.102879       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-867165_989b48c3-31de-413c-b8a0-62d1bb8e7055!
	I1024 20:12:42.103326       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c01928cf-4170-49fd-8f37-2d3fc3f03c41", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-867165_989b48c3-31de-413c-b8a0-62d1bb8e7055 became leader
	I1024 20:12:42.204844       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-867165_989b48c3-31de-413c-b8a0-62d1bb8e7055!
	
	* 
	* ==> storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] <==
	* I1024 20:11:53.468434       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1024 20:12:23.471358       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-867165 -n embed-certs-867165
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-867165 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-pv9ww
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-867165 describe pod metrics-server-57f55c9bc5-pv9ww
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-867165 describe pod metrics-server-57f55c9bc5-pv9ww: exit status 1 (72.052567ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-pv9ww" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-867165 describe pod metrics-server-57f55c9bc5-pv9ww: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (363.45s)
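The kubelet journal above shows metrics-server repeatedly stuck in ImagePullBackOff on the deliberately unreachable fake.domain/registry.k8s.io/echoserver:1.4 image, which matches the post-mortem listing metrics-server-57f55c9bc5-pv9ww as the only non-running pod. As a minimal sketch (not the suite's actual helper), the same non-running-pod listing can be reproduced by shelling out to kubectl exactly as helpers_test.go does; kubectl on PATH is assumed and the context name is taken from the logs above:

// sketch: reproduce the "non-running pods" post-mortem listing via kubectl
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func nonRunningPods(kubeContext string) ([]string, error) {
	// Same invocation as helpers_test.go above: pod names across all
	// namespaces whose phase is not Running.
	out, err := exec.Command("kubectl",
		"--context", kubeContext,
		"get", "po",
		"-o=jsonpath={.items[*].metadata.name}",
		"-A",
		"--field-selector=status.phase!=Running",
	).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	pods, err := nonRunningPods("embed-certs-867165") // profile name from the logs above
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("non-running pods:", pods)
}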

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (479.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1024 20:26:00.584328   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-24 20:33:38.942054961 +0000 UTC m=+5580.707533961
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-643126 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-643126 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.59µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-643126 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
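The failing step above waits 9m0s for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and then inspects deploy/dashboard-metrics-scraper for the expected registry.k8s.io/echoserver:1.4 image. A minimal sketch of that kind of wait, shelling out to kubectl as the suite does; the helper name and the 10s poll interval are illustrative assumptions, while the context, namespace and label come from the messages above:

// sketch: poll for a labelled pod to reach phase Running before a deadline
package main

import (
	"context"
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func waitForLabeledPod(ctx context.Context, kubeContext, namespace, selector string) error {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		// List the phases of all pods matching the label selector.
		out, err := exec.CommandContext(ctx, "kubectl",
			"--context", kubeContext,
			"-n", namespace,
			"get", "po",
			"-l", selector,
			"-o=jsonpath={.items[*].status.phase}",
		).Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		select {
		case <-ctx.Done():
			// Mirrors the "context deadline exceeded" failure above when no pod appears.
			return fmt.Errorf("pod %q did not start: %w", selector, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForLabeledPod(ctx, "default-k8s-diff-port-643126", "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubernetes-dashboard pod is Running")
}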
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-643126 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-643126 logs -n 25: (1.371774867s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-784554 sudo                               | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-784554 sudo containerd                       | auto-784554           | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo                               | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo cat                           | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-784554 sudo systemctl                        | auto-784554           | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo cat                           | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-784554 sudo systemctl                        | auto-784554           | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-784554 sudo find                             | auto-784554           | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo                               | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p auto-784554 sudo crio                             | auto-784554           | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo                               | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo cat                           | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| delete  | -p auto-784554                                       | auto-784554           | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	| ssh     | -p kindnet-784554 sudo docker                        | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo                               | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo                               | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| start   | -p custom-flannel-784554                             | custom-flannel-784554 | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo cat                           | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo cat                           | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo                               | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo                               | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo                               | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo cat                           | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo cat                           | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-784554 sudo                               | kindnet-784554        | jenkins | v1.31.2 | 24 Oct 23 20:33 UTC | 24 Oct 23 20:33 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 20:33:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 20:33:37.348513   59497 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:33:37.348710   59497 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:33:37.348738   59497 out.go:309] Setting ErrFile to fd 2...
	I1024 20:33:37.348755   59497 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:33:37.349084   59497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:33:37.349941   59497 out.go:303] Setting JSON to false
	I1024 20:33:37.351396   59497 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7915,"bootTime":1698171702,"procs":370,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 20:33:37.351503   59497 start.go:138] virtualization: kvm guest
	I1024 20:33:37.354004   59497 out.go:177] * [custom-flannel-784554] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 20:33:37.355577   59497 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:33:37.355593   59497 notify.go:220] Checking for updates...
	I1024 20:33:37.357639   59497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:33:37.359667   59497 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:33:37.362440   59497 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:33:37.364831   59497 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 20:33:37.366215   59497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:33:37.368119   59497 config.go:182] Loaded profile config "calico-784554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:33:37.368295   59497 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:33:37.368425   59497 config.go:182] Loaded profile config "kindnet-784554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:33:37.368532   59497 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:33:37.404663   59497 out.go:177] * Using the kvm2 driver based on user configuration
	I1024 20:33:37.406221   59497 start.go:298] selected driver: kvm2
	I1024 20:33:37.406240   59497 start.go:902] validating driver "kvm2" against <nil>
	I1024 20:33:37.406254   59497 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:33:37.407041   59497 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:33:37.407101   59497 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 20:33:37.422528   59497 install.go:137] /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1024 20:33:37.422588   59497 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 20:33:37.422886   59497 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 20:33:37.422967   59497 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1024 20:33:37.422996   59497 start_flags.go:318] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1024 20:33:37.423009   59497 start_flags.go:323] config:
	{Name:custom-flannel-784554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:custom-flannel-784554 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s G
PUs:}
	I1024 20:33:37.423205   59497 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:33:37.424964   59497 out.go:177] * Starting control plane node custom-flannel-784554 in cluster custom-flannel-784554
	I1024 20:33:34.186663   57248 main.go:141] libmachine: (calico-784554) DBG | domain calico-784554 has defined MAC address 52:54:00:c4:32:0a in network mk-calico-784554
	I1024 20:33:34.187084   57248 main.go:141] libmachine: (calico-784554) DBG | unable to find current IP address of domain calico-784554 in network mk-calico-784554
	I1024 20:33:34.187103   57248 main.go:141] libmachine: (calico-784554) DBG | I1024 20:33:34.187062   57271 retry.go:31] will retry after 3.759238648s: waiting for machine to come up
	I1024 20:33:37.950209   57248 main.go:141] libmachine: (calico-784554) DBG | domain calico-784554 has defined MAC address 52:54:00:c4:32:0a in network mk-calico-784554
	I1024 20:33:37.950711   57248 main.go:141] libmachine: (calico-784554) DBG | unable to find current IP address of domain calico-784554 in network mk-calico-784554
	I1024 20:33:37.950733   57248 main.go:141] libmachine: (calico-784554) DBG | I1024 20:33:37.950647   57271 retry.go:31] will retry after 5.288499279s: waiting for machine to come up
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 20:11:37 UTC, ends at Tue 2023-10-24 20:33:39 UTC. --
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.729760336Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7185caa9-b222-40da-ab6b-cb015bf00a19 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.731560815Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=08690cdc-7eaf-47d6-bbe9-4ece8e494d17 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.732171583Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179619732155245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=08690cdc-7eaf-47d6-bbe9-4ece8e494d17 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.732706155Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=62203f87-40c7-408d-9727-517b43503e66 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.732809965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=62203f87-40c7-408d-9727-517b43503e66 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.733110892Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178365177645624,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0fca5db1c6e6cd414f6e628eb656f54fea10276fdbed1480e151c2b78ccaa2,PodSandboxId:2068401dd05a9d5f7d28baf7bce29314378d331360632dd9ac6d5c7d9fa16f0c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178342875716149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65a34d3b-218a-456c-8c23-ec8d153cbbc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4968ce,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc,PodSandboxId:3e8a8afb8a5e56348c944e709ad020f062e60c4c354826b59b020a9bb30b4ab6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178341324178540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mklhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53629562-a50d-4ca5-80ab-baed4852b4d7,},Annotations:map[string]string{io.kubernetes.container.hash: 47d386ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178334232828396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139,PodSandboxId:a44b1838edc1b67b0c2a39fc2c9ffc3d0030a856acdcf935918e0b11d16572dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178333997550171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x4zbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
47f6c48-c4de-4feb-a3ea-8874c980d263,},Annotations:map[string]string{io.kubernetes.container.hash: 33bcdd1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591,PodSandboxId:38c35866ffc89e09cf124615c84b76d9bd1995a227016a5a3e9b7ec3a5e6f28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178327498291438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e419dd8a9426a70be6e020ac0e950e19,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf,PodSandboxId:9333f19493abfe672b5f468de087fc27e69c4dd7b3bd12390d48b7978d48d5b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178327031893161,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e04e11a7b4eef363358253e1bcb9bbb,},An
notations:map[string]string{io.kubernetes.container.hash: 3f303518,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928,PodSandboxId:de6c3901c21d62e93a43ad72a4e058f4436cc931f8fccc032f22f277e21b961b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178327084269912,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c0a5a5ea38cfcbc96a50f8fa8b28db,},An
notations:map[string]string{io.kubernetes.container.hash: b0b33473,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687,PodSandboxId:d22c3191e55246293aac485dd5eed29f79c4d428394f317268e523b152ee38f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178327129366826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
6a4b6de4f1fe8085ff32bfcacd2354a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=62203f87-40c7-408d-9727-517b43503e66 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.779564343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e678287c-499c-4ce3-a627-d6449b376496 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.779653917Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e678287c-499c-4ce3-a627-d6449b376496 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.781261364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b4b8e2f6-3840-40ce-b3c7-4f2ae3d8c393 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.781727554Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179619781711684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b4b8e2f6-3840-40ce-b3c7-4f2ae3d8c393 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.783086665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b2c533de-ea05-44ee-a606-697c8c1c4ae4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.783182952Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b2c533de-ea05-44ee-a606-697c8c1c4ae4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.783388207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178365177645624,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0fca5db1c6e6cd414f6e628eb656f54fea10276fdbed1480e151c2b78ccaa2,PodSandboxId:2068401dd05a9d5f7d28baf7bce29314378d331360632dd9ac6d5c7d9fa16f0c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178342875716149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65a34d3b-218a-456c-8c23-ec8d153cbbc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4968ce,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc,PodSandboxId:3e8a8afb8a5e56348c944e709ad020f062e60c4c354826b59b020a9bb30b4ab6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178341324178540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mklhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53629562-a50d-4ca5-80ab-baed4852b4d7,},Annotations:map[string]string{io.kubernetes.container.hash: 47d386ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178334232828396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139,PodSandboxId:a44b1838edc1b67b0c2a39fc2c9ffc3d0030a856acdcf935918e0b11d16572dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178333997550171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x4zbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
47f6c48-c4de-4feb-a3ea-8874c980d263,},Annotations:map[string]string{io.kubernetes.container.hash: 33bcdd1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591,PodSandboxId:38c35866ffc89e09cf124615c84b76d9bd1995a227016a5a3e9b7ec3a5e6f28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178327498291438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e419dd8a9426a70be6e020ac0e950e19,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf,PodSandboxId:9333f19493abfe672b5f468de087fc27e69c4dd7b3bd12390d48b7978d48d5b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178327031893161,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e04e11a7b4eef363358253e1bcb9bbb,},An
notations:map[string]string{io.kubernetes.container.hash: 3f303518,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928,PodSandboxId:de6c3901c21d62e93a43ad72a4e058f4436cc931f8fccc032f22f277e21b961b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178327084269912,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c0a5a5ea38cfcbc96a50f8fa8b28db,},An
notations:map[string]string{io.kubernetes.container.hash: b0b33473,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687,PodSandboxId:d22c3191e55246293aac485dd5eed29f79c4d428394f317268e523b152ee38f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178327129366826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
6a4b6de4f1fe8085ff32bfcacd2354a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b2c533de-ea05-44ee-a606-697c8c1c4ae4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.785110655Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=6ded852e-fd99-4278-b0dc-14ae857d2cde name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.785389322Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2068401dd05a9d5f7d28baf7bce29314378d331360632dd9ac6d5c7d9fa16f0c,Metadata:&PodSandboxMetadata{Name:busybox,Uid:65a34d3b-218a-456c-8c23-ec8d153cbbc0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178340839299518,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65a34d3b-218a-456c-8c23-ec8d153cbbc0,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T20:12:12.847723362Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3e8a8afb8a5e56348c944e709ad020f062e60c4c354826b59b020a9bb30b4ab6,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-mklhw,Uid:53629562-a50d-4ca5-80ab-baed4852b4d7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:169817
8340528326113,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-mklhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53629562-a50d-4ca5-80ab-baed4852b4d7,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T20:12:12.847739963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:09d3ab03b89492bdc873b3635aa67a4bbdc817dc49dd228ffc5ecb90b96df365,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-lmxdt,Uid:9b235003-ac4a-491b-af2e-9af54e79922c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178336937617386,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-lmxdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b235003-ac4a-491b-af2e-9af54e79922c,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24
T20:12:12.847736288Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:53920350-b0f4-4486-88a8-b97ed6c1cf17,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178333207502326,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-24T20:12:12.847737900Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a44b1838edc1b67b0c2a39fc2c9ffc3d0030a856acdcf935918e0b11d16572dd,Metadata:&PodSandboxMetadata{Name:kube-proxy-x4zbh,Uid:a47f6c48-c4de-4feb-a3ea-8874c980d263,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178333202235197,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-x4zbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a47f6c48-c4de-4feb-a3ea-8874c980d263,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kub
ernetes.io/config.seen: 2023-10-24T20:12:12.847744916Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9333f19493abfe672b5f468de087fc27e69c4dd7b3bd12390d48b7978d48d5b7,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-643126,Uid:9e04e11a7b4eef363358253e1bcb9bbb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178326400645748,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e04e11a7b4eef363358253e1bcb9bbb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.148:2379,kubernetes.io/config.hash: 9e04e11a7b4eef363358253e1bcb9bbb,kubernetes.io/config.seen: 2023-10-24T20:12:05.846163034Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d22c3191e55246293aac485dd5eed29f79c4d428394f317268e523b152ee38f2,Metadata:&PodSandboxMetadata{Name:k
ube-controller-manager-default-k8s-diff-port-643126,Uid:f6a4b6de4f1fe8085ff32bfcacd2354a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178326390731168,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6a4b6de4f1fe8085ff32bfcacd2354a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f6a4b6de4f1fe8085ff32bfcacd2354a,kubernetes.io/config.seen: 2023-10-24T20:12:05.846155337Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:de6c3901c21d62e93a43ad72a4e058f4436cc931f8fccc032f22f277e21b961b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-643126,Uid:09c0a5a5ea38cfcbc96a50f8fa8b28db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178326371160330,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c0a5a5ea38cfcbc96a50f8fa8b28db,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.148:8444,kubernetes.io/config.hash: 09c0a5a5ea38cfcbc96a50f8fa8b28db,kubernetes.io/config.seen: 2023-10-24T20:12:05.846164336Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:38c35866ffc89e09cf124615c84b76d9bd1995a227016a5a3e9b7ec3a5e6f28a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-643126,Uid:e419dd8a9426a70be6e020ac0e950e19,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178326351474958,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e419dd8a9426a70be6e020ac0e950e19,tier: control-pl
ane,},Annotations:map[string]string{kubernetes.io/config.hash: e419dd8a9426a70be6e020ac0e950e19,kubernetes.io/config.seen: 2023-10-24T20:12:05.846161567Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=6ded852e-fd99-4278-b0dc-14ae857d2cde name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.785971888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1b4ab211-9ae2-4b62-ad76-c64e9c11c241 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.786137754Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1b4ab211-9ae2-4b62-ad76-c64e9c11c241 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.786330149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178365177645624,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0fca5db1c6e6cd414f6e628eb656f54fea10276fdbed1480e151c2b78ccaa2,PodSandboxId:2068401dd05a9d5f7d28baf7bce29314378d331360632dd9ac6d5c7d9fa16f0c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178342875716149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65a34d3b-218a-456c-8c23-ec8d153cbbc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4968ce,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc,PodSandboxId:3e8a8afb8a5e56348c944e709ad020f062e60c4c354826b59b020a9bb30b4ab6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178341324178540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mklhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53629562-a50d-4ca5-80ab-baed4852b4d7,},Annotations:map[string]string{io.kubernetes.container.hash: 47d386ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178334232828396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139,PodSandboxId:a44b1838edc1b67b0c2a39fc2c9ffc3d0030a856acdcf935918e0b11d16572dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178333997550171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x4zbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
47f6c48-c4de-4feb-a3ea-8874c980d263,},Annotations:map[string]string{io.kubernetes.container.hash: 33bcdd1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591,PodSandboxId:38c35866ffc89e09cf124615c84b76d9bd1995a227016a5a3e9b7ec3a5e6f28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178327498291438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e419dd8a9426a70be6e020ac0e950e19,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf,PodSandboxId:9333f19493abfe672b5f468de087fc27e69c4dd7b3bd12390d48b7978d48d5b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178327031893161,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e04e11a7b4eef363358253e1bcb9bbb,},An
notations:map[string]string{io.kubernetes.container.hash: 3f303518,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928,PodSandboxId:de6c3901c21d62e93a43ad72a4e058f4436cc931f8fccc032f22f277e21b961b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178327084269912,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c0a5a5ea38cfcbc96a50f8fa8b28db,},An
notations:map[string]string{io.kubernetes.container.hash: b0b33473,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687,PodSandboxId:d22c3191e55246293aac485dd5eed29f79c4d428394f317268e523b152ee38f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178327129366826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
6a4b6de4f1fe8085ff32bfcacd2354a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1b4ab211-9ae2-4b62-ad76-c64e9c11c241 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.827255657Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9bff0749-2d45-4aca-adcd-503aaf1addf9 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.827375330Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9bff0749-2d45-4aca-adcd-503aaf1addf9 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.828702212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b08693bb-ff06-4e9a-a112-b41f0e1e034d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.829156329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179619829143410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b08693bb-ff06-4e9a-a112-b41f0e1e034d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.829838448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7a9da0d0-13a5-4fab-b6ef-236c4b18a81d name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.829908812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7a9da0d0-13a5-4fab-b6ef-236c4b18a81d name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:33:39 default-k8s-diff-port-643126 crio[726]: time="2023-10-24 20:33:39.830228344Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178365177645624,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0fca5db1c6e6cd414f6e628eb656f54fea10276fdbed1480e151c2b78ccaa2,PodSandboxId:2068401dd05a9d5f7d28baf7bce29314378d331360632dd9ac6d5c7d9fa16f0c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178342875716149,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65a34d3b-218a-456c-8c23-ec8d153cbbc0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4968ce,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc,PodSandboxId:3e8a8afb8a5e56348c944e709ad020f062e60c4c354826b59b020a9bb30b4ab6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1698178341324178540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mklhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53629562-a50d-4ca5-80ab-baed4852b4d7,},Annotations:map[string]string{io.kubernetes.container.hash: 47d386ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3,PodSandboxId:fb5a41cb7e24643ce766c4da66a4fdc8be8a5200dc2f1c9875ff1055811d792b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1698178334232828396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 53920350-b0f4-4486-88a8-b97ed6c1cf17,},Annotations:map[string]string{io.kubernetes.container.hash: 3a3b5977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139,PodSandboxId:a44b1838edc1b67b0c2a39fc2c9ffc3d0030a856acdcf935918e0b11d16572dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1698178333997550171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x4zbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
47f6c48-c4de-4feb-a3ea-8874c980d263,},Annotations:map[string]string{io.kubernetes.container.hash: 33bcdd1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591,PodSandboxId:38c35866ffc89e09cf124615c84b76d9bd1995a227016a5a3e9b7ec3a5e6f28a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1698178327498291438,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: e419dd8a9426a70be6e020ac0e950e19,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf,PodSandboxId:9333f19493abfe672b5f468de087fc27e69c4dd7b3bd12390d48b7978d48d5b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1698178327031893161,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e04e11a7b4eef363358253e1bcb9bbb,},An
notations:map[string]string{io.kubernetes.container.hash: 3f303518,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928,PodSandboxId:de6c3901c21d62e93a43ad72a4e058f4436cc931f8fccc032f22f277e21b961b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1698178327084269912,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c0a5a5ea38cfcbc96a50f8fa8b28db,},An
notations:map[string]string{io.kubernetes.container.hash: b0b33473,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687,PodSandboxId:d22c3191e55246293aac485dd5eed29f79c4d428394f317268e523b152ee38f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1698178327129366826,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-643126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
6a4b6de4f1fe8085ff32bfcacd2354a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7a9da0d0-13a5-4fab-b6ef-236c4b18a81d name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0198578b96c6d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       3                   fb5a41cb7e246       storage-provisioner
	2a0fca5db1c6e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   2068401dd05a9       busybox
	5520a46163d9a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      21 minutes ago      Running             coredns                   1                   3e8a8afb8a5e5       coredns-5dd5756b68-mklhw
	94c1196dd672c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       2                   fb5a41cb7e246       storage-provisioner
	4c95bbf4f285b       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      21 minutes ago      Running             kube-proxy                1                   a44b1838edc1b       kube-proxy-x4zbh
	742064a59716b       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      21 minutes ago      Running             kube-scheduler            1                   38c35866ffc89       kube-scheduler-default-k8s-diff-port-643126
	7e5201f16577b       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      21 minutes ago      Running             kube-controller-manager   1                   d22c3191e5524       kube-controller-manager-default-k8s-diff-port-643126
	cc891cea4cf91       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      21 minutes ago      Running             kube-apiserver            1                   de6c3901c21d6       kube-apiserver-default-k8s-diff-port-643126
	297b00416e9d4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   9333f19493abf       etcd-default-k8s-diff-port-643126
	
	* 
	* ==> coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45270 - 44506 "HINFO IN 4684813267403133358.2973808512917307922. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010024308s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-643126
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-643126
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=default-k8s-diff-port-643126
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T20_04_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 20:04:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-643126
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 20:33:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 20:33:08 +0000   Tue, 24 Oct 2023 20:04:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 20:33:08 +0000   Tue, 24 Oct 2023 20:04:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 20:33:08 +0000   Tue, 24 Oct 2023 20:04:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 20:33:08 +0000   Tue, 24 Oct 2023 20:12:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.148
	  Hostname:    default-k8s-diff-port-643126
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 b71eed24e60a4ca1869c2bb0fec81460
	  System UUID:                b71eed24-e60a-4ca1-869c-2bb0fec81460
	  Boot ID:                    d3527ccf-a3b5-4214-80ca-d143812274e4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-mklhw                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-643126                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-643126             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-643126    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-x4zbh                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-643126             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-lmxdt                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-643126 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-643126 event: Registered Node default-k8s-diff-port-643126 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-643126 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-643126 event: Registered Node default-k8s-diff-port-643126 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct24 20:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076321] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.429048] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.161299] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.138068] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.499490] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.257405] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.116874] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.164731] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.113474] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.258182] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[Oct24 20:12] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[ +14.990963] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] <==
	* {"level":"info","ts":"2023-10-24T20:22:10.709627Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":851}
	{"level":"info","ts":"2023-10-24T20:22:10.712308Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":851,"took":"2.380512ms","hash":391525199}
	{"level":"info","ts":"2023-10-24T20:22:10.712368Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":391525199,"revision":851,"compact-revision":-1}
	{"level":"info","ts":"2023-10-24T20:27:10.717542Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1093}
	{"level":"info","ts":"2023-10-24T20:27:10.720207Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1093,"took":"1.915044ms","hash":4115797229}
	{"level":"info","ts":"2023-10-24T20:27:10.720357Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4115797229,"revision":1093,"compact-revision":851}
	{"level":"info","ts":"2023-10-24T20:31:31.159544Z","caller":"traceutil/trace.go:171","msg":"trace[1680158570] transaction","detail":"{read_only:false; response_revision:1547; number_of_response:1; }","duration":"123.416442ms","start":"2023-10-24T20:31:31.036091Z","end":"2023-10-24T20:31:31.159508Z","steps":["trace[1680158570] 'process raft request'  (duration: 123.26685ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T20:31:31.42746Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.278425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T20:31:31.427568Z","caller":"traceutil/trace.go:171","msg":"trace[676506952] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1547; }","duration":"220.477353ms","start":"2023-10-24T20:31:31.207074Z","end":"2023-10-24T20:31:31.427551Z","steps":["trace[676506952] 'range keys from in-memory index tree'  (duration: 220.17309ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:32:10.725501Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1337}
	{"level":"info","ts":"2023-10-24T20:32:10.72746Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1337,"took":"1.690727ms","hash":2243872881}
	{"level":"info","ts":"2023-10-24T20:32:10.727534Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2243872881,"revision":1337,"compact-revision":1093}
	{"level":"info","ts":"2023-10-24T20:32:25.789548Z","caller":"traceutil/trace.go:171","msg":"trace[291715336] linearizableReadLoop","detail":"{readStateIndex:1880; appliedIndex:1879; }","duration":"104.276491ms","start":"2023-10-24T20:32:25.685228Z","end":"2023-10-24T20:32:25.789505Z","steps":["trace[291715336] 'read index received'  (duration: 104.078396ms)","trace[291715336] 'applied index is now lower than readState.Index'  (duration: 197.505µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T20:32:25.789779Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.626067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-10-24T20:32:25.789832Z","caller":"traceutil/trace.go:171","msg":"trace[500006983] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:0; response_revision:1593; }","duration":"104.726132ms","start":"2023-10-24T20:32:25.685099Z","end":"2023-10-24T20:32:25.789825Z","steps":["trace[500006983] 'agreement among raft nodes before linearized reading'  (duration: 104.593843ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:32:25.790115Z","caller":"traceutil/trace.go:171","msg":"trace[1346186707] transaction","detail":"{read_only:false; response_revision:1593; number_of_response:1; }","duration":"190.302431ms","start":"2023-10-24T20:32:25.599803Z","end":"2023-10-24T20:32:25.790106Z","steps":["trace[1346186707] 'process raft request'  (duration: 189.543505ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T20:32:28.009981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.538133ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3273707237820469067 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1593 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-24T20:32:28.010627Z","caller":"traceutil/trace.go:171","msg":"trace[123480722] transaction","detail":"{read_only:false; response_revision:1595; number_of_response:1; }","duration":"208.963637ms","start":"2023-10-24T20:32:27.801644Z","end":"2023-10-24T20:32:28.010608Z","steps":["trace[123480722] 'process raft request'  (duration: 87.660244ms)","trace[123480722] 'compare'  (duration: 119.453137ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T20:32:30.311789Z","caller":"traceutil/trace.go:171","msg":"trace[1401233748] linearizableReadLoop","detail":"{readStateIndex:1884; appliedIndex:1883; }","duration":"227.596783ms","start":"2023-10-24T20:32:30.084173Z","end":"2023-10-24T20:32:30.31177Z","steps":["trace[1401233748] 'read index received'  (duration: 227.359258ms)","trace[1401233748] 'applied index is now lower than readState.Index'  (duration: 237.006µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T20:32:30.312075Z","caller":"traceutil/trace.go:171","msg":"trace[872909658] transaction","detail":"{read_only:false; response_revision:1596; number_of_response:1; }","duration":"293.211262ms","start":"2023-10-24T20:32:30.018768Z","end":"2023-10-24T20:32:30.311979Z","steps":["trace[872909658] 'process raft request'  (duration: 292.817312ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T20:32:30.312294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.689897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T20:32:30.312394Z","caller":"traceutil/trace.go:171","msg":"trace[2131732865] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1596; }","duration":"105.798117ms","start":"2023-10-24T20:32:30.20657Z","end":"2023-10-24T20:32:30.312368Z","steps":["trace[2131732865] 'agreement among raft nodes before linearized reading'  (duration: 105.665771ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T20:32:30.312548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.384913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T20:32:30.312605Z","caller":"traceutil/trace.go:171","msg":"trace[637342304] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1596; }","duration":"228.446384ms","start":"2023-10-24T20:32:30.084151Z","end":"2023-10-24T20:32:30.312597Z","steps":["trace[637342304] 'agreement among raft nodes before linearized reading'  (duration: 228.369508ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T20:32:50.563298Z","caller":"traceutil/trace.go:171","msg":"trace[2020558999] transaction","detail":"{read_only:false; response_revision:1612; number_of_response:1; }","duration":"115.190742ms","start":"2023-10-24T20:32:50.448089Z","end":"2023-10-24T20:32:50.56328Z","steps":["trace[2020558999] 'process raft request'  (duration: 115.048147ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:33:40 up 22 min,  0 users,  load average: 0.22, 0.21, 0.21
	Linux default-k8s-diff-port-643126 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] <==
	* W1024 20:30:13.421612       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:30:13.421715       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:30:13.421750       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:31:12.268771       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 20:32:12.268925       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:32:12.425979       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:32:12.426168       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:32:12.426606       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:32:13.426496       1 handler_proxy.go:93] no RequestInfo found in the context
	W1024 20:32:13.427433       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:32:13.427491       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:32:13.427502       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1024 20:32:13.427704       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:32:13.429213       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:33:12.269134       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:33:13.427646       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:33:13.427727       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:33:13.427742       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:33:13.430195       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:33:13.430342       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:33:13.430358       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] <==
	* I1024 20:27:55.812241       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:28:25.163472       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:28:25.822377       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1024 20:28:45.918901       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="286.547µs"
	E1024 20:28:55.168945       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:28:55.832123       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1024 20:28:57.916352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="172.249µs"
	E1024 20:29:25.174806       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:29:25.843466       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:29:55.180636       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:29:55.851897       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:30:25.193127       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:30:25.860497       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:30:55.200280       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:30:55.869822       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:31:25.207763       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:31:25.881463       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:31:55.213711       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:31:55.897316       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:32:25.225809       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:32:25.913850       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:32:55.231983       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:32:55.923413       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:33:25.238853       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:33:25.937483       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] <==
	* I1024 20:12:14.462772       1 server_others.go:69] "Using iptables proxy"
	I1024 20:12:14.531364       1 node.go:141] Successfully retrieved node IP: 192.168.61.148
	I1024 20:12:14.812227       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 20:12:14.812274       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 20:12:14.816273       1 server_others.go:152] "Using iptables Proxier"
	I1024 20:12:14.816411       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 20:12:14.817087       1 server.go:846] "Version info" version="v1.28.3"
	I1024 20:12:14.817651       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:12:14.818749       1 config.go:188] "Starting service config controller"
	I1024 20:12:14.818799       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 20:12:14.818820       1 config.go:97] "Starting endpoint slice config controller"
	I1024 20:12:14.818823       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 20:12:14.821818       1 config.go:315] "Starting node config controller"
	I1024 20:12:14.821852       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 20:12:14.919951       1 shared_informer.go:318] Caches are synced for service config
	I1024 20:12:14.920114       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 20:12:14.923109       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] <==
	* I1024 20:12:09.445279       1 serving.go:348] Generated self-signed cert in-memory
	W1024 20:12:12.355464       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 20:12:12.355542       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 20:12:12.355570       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 20:12:12.355594       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 20:12:12.440118       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 20:12:12.440206       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:12:12.446848       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 20:12:12.446962       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 20:12:12.448347       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 20:12:12.446978       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 20:12:12.549301       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 20:11:37 UTC, ends at Tue 2023-10-24 20:33:40 UTC. --
	Oct 24 20:31:05 default-k8s-diff-port-643126 kubelet[931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:31:11 default-k8s-diff-port-643126 kubelet[931]: E1024 20:31:11.900084     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:31:24 default-k8s-diff-port-643126 kubelet[931]: E1024 20:31:24.900183     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:31:39 default-k8s-diff-port-643126 kubelet[931]: E1024 20:31:39.900504     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:31:53 default-k8s-diff-port-643126 kubelet[931]: E1024 20:31:53.899720     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:32:05 default-k8s-diff-port-643126 kubelet[931]: E1024 20:32:05.919306     931 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:32:05 default-k8s-diff-port-643126 kubelet[931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:32:05 default-k8s-diff-port-643126 kubelet[931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:32:05 default-k8s-diff-port-643126 kubelet[931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:32:05 default-k8s-diff-port-643126 kubelet[931]: E1024 20:32:05.923783     931 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Oct 24 20:32:07 default-k8s-diff-port-643126 kubelet[931]: E1024 20:32:07.900106     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:32:21 default-k8s-diff-port-643126 kubelet[931]: E1024 20:32:21.899796     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:32:35 default-k8s-diff-port-643126 kubelet[931]: E1024 20:32:35.900246     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:32:49 default-k8s-diff-port-643126 kubelet[931]: E1024 20:32:49.900119     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:33:01 default-k8s-diff-port-643126 kubelet[931]: E1024 20:33:01.901336     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:33:06 default-k8s-diff-port-643126 kubelet[931]: E1024 20:33:06.019321     931 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:33:06 default-k8s-diff-port-643126 kubelet[931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:33:06 default-k8s-diff-port-643126 kubelet[931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:33:06 default-k8s-diff-port-643126 kubelet[931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:33:14 default-k8s-diff-port-643126 kubelet[931]: E1024 20:33:14.900381     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:33:25 default-k8s-diff-port-643126 kubelet[931]: E1024 20:33:25.901283     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	Oct 24 20:33:37 default-k8s-diff-port-643126 kubelet[931]: E1024 20:33:37.909750     931 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 24 20:33:37 default-k8s-diff-port-643126 kubelet[931]: E1024 20:33:37.909792     931 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 24 20:33:37 default-k8s-diff-port-643126 kubelet[931]: E1024 20:33:37.909982     931 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zgrtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-lmxdt_kube-system(9b235003-ac4a-491b-af2e-9af54e79922c): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 24 20:33:37 default-k8s-diff-port-643126 kubelet[931]: E1024 20:33:37.910108     931 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-lmxdt" podUID="9b235003-ac4a-491b-af2e-9af54e79922c"
	
	* 
	* ==> storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] <==
	* I1024 20:12:45.332861       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 20:12:45.351280       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 20:12:45.351457       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 20:13:02.753900       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 20:13:02.754214       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-643126_4be52ba6-9b59-46c1-96ca-19a76a5b2a3d!
	I1024 20:13:02.756690       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"38aa9f7b-a64f-4486-8c9a-e6ebab2efbcb", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-643126_4be52ba6-9b59-46c1-96ca-19a76a5b2a3d became leader
	I1024 20:13:02.854462       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-643126_4be52ba6-9b59-46c1-96ca-19a76a5b2a3d!
	
	* 
	* ==> storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] <==
	* I1024 20:12:14.585865       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1024 20:12:44.595803       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
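Note on the kube-controller-manager errors repeated in the dump above: the "stale GroupVersion discovery: metrics.k8s.io/v1beta1" messages are the usual symptom of the metrics.k8s.io APIService being registered but never becoming Available, which matches the metrics-server pod stuck in ImagePullBackOff after its registry was redirected to the unresolvable fake.domain host. As a hedged sketch (not part of helpers_test.go, and the context name is simply taken from this run), one could confirm that with a small Go helper that shells out to kubectl:

// apiservice_check.go: minimal sketch, not the test suite's own code.
// It asks kubectl whether the metrics.k8s.io/v1beta1 APIService reports
// Available=True; if not, the controller-manager keeps logging the
// "stale GroupVersion discovery" errors seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "default-k8s-diff-port-643126" // assumed profile/context from this report
	out, err := exec.Command("kubectl", "--context", ctx,
		"get", "apiservice", "v1beta1.metrics.k8s.io",
		"-o", "jsonpath={.status.conditions[?(@.type==\"Available\")].status}").CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	if strings.TrimSpace(string(out)) != "True" {
		fmt.Println("metrics.k8s.io/v1beta1 is registered but not Available")
		return
	}
	fmt.Println("metrics.k8s.io/v1beta1 is Available")
}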
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-643126 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-lmxdt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-643126 describe pod metrics-server-57f55c9bc5-lmxdt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-643126 describe pod metrics-server-57f55c9bc5-lmxdt: exit status 1 (65.099039ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-lmxdt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-643126 describe pod metrics-server-57f55c9bc5-lmxdt: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (479.33s)
E1024 20:36:00.584395   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 20:36:01.490573   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (244.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1024 20:27:23.631034   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 20:28:10.558374   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 20:28:19.104690   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-014826 -n no-preload-014826
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-24 20:31:07.824155727 +0000 UTC m=+5429.589634723
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-014826 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-014826 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.622µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-014826 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
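For reference, the expectation that fails here (start_stop_delete_test.go:297) is that the dashboard-metrics-scraper deployment carries the substituted image registry.k8s.io/echoserver:1.4. A minimal, hedged way to reproduce that verification by hand is sketched below; it is not the test's own code, and the context name is taken from this run (the deployment may simply not exist, which is what the context-deadline error above suggests).

// image_check.go: sketch only; mirrors the image expectation behind
// start_stop_delete_test.go:297 by reading the deployment's container
// images with a kubectl jsonpath query.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "no-preload-014826" // context from this report
	out, err := exec.Command("kubectl", "--context", ctx,
		"-n", "kubernetes-dashboard",
		"get", "deploy", "dashboard-metrics-scraper",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl get failed (deployment may not exist yet): %v\n%s\n", err, out)
		return
	}
	if strings.Contains(string(out), "registry.k8s.io/echoserver:1.4") {
		fmt.Println("dashboard addon loaded the expected substituted image")
	} else {
		fmt.Printf("unexpected image(s): %s\n", out)
	}
}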
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014826 -n no-preload-014826
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-014826 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-014826 logs -n 25: (1.394495211s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-145190                              | stopped-upgrade-145190       | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:01 UTC |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:02 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-087071 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | disable-driver-mounts-087071                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:05 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-014826             | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-867165            | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC | 24 Oct 23 20:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-643126  | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC | 24 Oct 23 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC |                     |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-014826                  | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-867165                 | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467375        | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-643126       | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:08 UTC | 24 Oct 23 20:16 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467375             | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC | 24 Oct 23 20:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:30 UTC | 24 Oct 23 20:30 UTC |
	| start   | -p newest-cni-398707 --memory=2200 --alsologtostderr   | newest-cni-398707            | jenkins | v1.31.2 | 24 Oct 23 20:30 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 20:30:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 20:30:59.796646   55046 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:30:59.796745   55046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:30:59.796749   55046 out.go:309] Setting ErrFile to fd 2...
	I1024 20:30:59.796754   55046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:30:59.796945   55046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:30:59.797536   55046 out.go:303] Setting JSON to false
	I1024 20:30:59.798472   55046 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7758,"bootTime":1698171702,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 20:30:59.798526   55046 start.go:138] virtualization: kvm guest
	I1024 20:30:59.801242   55046 out.go:177] * [newest-cni-398707] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 20:30:59.803264   55046 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:30:59.804738   55046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:30:59.803256   55046 notify.go:220] Checking for updates...
	I1024 20:30:59.806408   55046 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:30:59.808002   55046 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:30:59.809415   55046 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 20:30:59.810764   55046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:30:59.812414   55046 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:30:59.812527   55046 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:30:59.812633   55046 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:30:59.812730   55046 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:30:59.850702   55046 out.go:177] * Using the kvm2 driver based on user configuration
	I1024 20:30:59.852293   55046 start.go:298] selected driver: kvm2
	I1024 20:30:59.852311   55046 start.go:902] validating driver "kvm2" against <nil>
	I1024 20:30:59.852324   55046 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:30:59.853339   55046 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:30:59.853415   55046 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 20:30:59.868758   55046 install.go:137] /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1024 20:30:59.868807   55046 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1024 20:30:59.868846   55046 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1024 20:30:59.869084   55046 start_flags.go:945] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1024 20:30:59.869159   55046 cni.go:84] Creating CNI manager for ""
	I1024 20:30:59.869177   55046 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:30:59.869190   55046 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1024 20:30:59.869199   55046 start_flags.go:323] config:
	{Name:newest-cni-398707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-398707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:30:59.869391   55046 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:30:59.871804   55046 out.go:177] * Starting control plane node newest-cni-398707 in cluster newest-cni-398707
	I1024 20:30:59.873223   55046 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:30:59.873271   55046 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1024 20:30:59.873284   55046 cache.go:57] Caching tarball of preloaded images
	I1024 20:30:59.873401   55046 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 20:30:59.873416   55046 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 20:30:59.873554   55046 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/newest-cni-398707/config.json ...
	I1024 20:30:59.873590   55046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/newest-cni-398707/config.json: {Name:mk080688edc8234ac3572cb68f28e6d145ad4d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:30:59.873752   55046 start.go:365] acquiring machines lock for newest-cni-398707: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:30:59.873786   55046 start.go:369] acquired machines lock for "newest-cni-398707" in 21.347µs
	I1024 20:30:59.873802   55046 start.go:93] Provisioning new machine with config: &{Name:newest-cni-398707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-398707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenki
ns:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:30:59.873870   55046 start.go:125] createHost starting for "" (driver="kvm2")
	I1024 20:30:59.875767   55046 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1024 20:30:59.876006   55046 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:30:59.876066   55046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:30:59.891648   55046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34241
	I1024 20:30:59.892086   55046 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:30:59.892648   55046 main.go:141] libmachine: Using API Version  1
	I1024 20:30:59.892662   55046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:30:59.892964   55046 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:30:59.893161   55046 main.go:141] libmachine: (newest-cni-398707) Calling .GetMachineName
	I1024 20:30:59.893290   55046 main.go:141] libmachine: (newest-cni-398707) Calling .DriverName
	I1024 20:30:59.893462   55046 start.go:159] libmachine.API.Create for "newest-cni-398707" (driver="kvm2")
	I1024 20:30:59.893494   55046 client.go:168] LocalClient.Create starting
	I1024 20:30:59.893530   55046 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem
	I1024 20:30:59.893575   55046 main.go:141] libmachine: Decoding PEM data...
	I1024 20:30:59.893593   55046 main.go:141] libmachine: Parsing certificate...
	I1024 20:30:59.893645   55046 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem
	I1024 20:30:59.893670   55046 main.go:141] libmachine: Decoding PEM data...
	I1024 20:30:59.893684   55046 main.go:141] libmachine: Parsing certificate...
	I1024 20:30:59.893708   55046 main.go:141] libmachine: Running pre-create checks...
	I1024 20:30:59.893721   55046 main.go:141] libmachine: (newest-cni-398707) Calling .PreCreateCheck
	I1024 20:30:59.893997   55046 main.go:141] libmachine: (newest-cni-398707) Calling .GetConfigRaw
	I1024 20:30:59.894396   55046 main.go:141] libmachine: Creating machine...
	I1024 20:30:59.894411   55046 main.go:141] libmachine: (newest-cni-398707) Calling .Create
	I1024 20:30:59.894520   55046 main.go:141] libmachine: (newest-cni-398707) Creating KVM machine...
	I1024 20:30:59.895723   55046 main.go:141] libmachine: (newest-cni-398707) DBG | found existing default KVM network
	I1024 20:30:59.897417   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:30:59.897241   55069 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000029f60}
	I1024 20:30:59.902946   55046 main.go:141] libmachine: (newest-cni-398707) DBG | trying to create private KVM network mk-newest-cni-398707 192.168.39.0/24...
	I1024 20:30:59.977766   55046 main.go:141] libmachine: (newest-cni-398707) DBG | private KVM network mk-newest-cni-398707 192.168.39.0/24 created
	I1024 20:30:59.977798   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:30:59.977751   55069 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:30:59.977827   55046 main.go:141] libmachine: (newest-cni-398707) Setting up store path in /home/jenkins/minikube-integration/17485-9023/.minikube/machines/newest-cni-398707 ...
	I1024 20:30:59.977846   55046 main.go:141] libmachine: (newest-cni-398707) Building disk image from file:///home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso
	I1024 20:30:59.977957   55046 main.go:141] libmachine: (newest-cni-398707) Downloading /home/jenkins/minikube-integration/17485-9023/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso...
	I1024 20:31:00.197720   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:00.197597   55069 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/newest-cni-398707/id_rsa...
	I1024 20:31:00.411178   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:00.411049   55069 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/newest-cni-398707/newest-cni-398707.rawdisk...
	I1024 20:31:00.411216   55046 main.go:141] libmachine: (newest-cni-398707) DBG | Writing magic tar header
	I1024 20:31:00.411233   55046 main.go:141] libmachine: (newest-cni-398707) DBG | Writing SSH key tar header
	I1024 20:31:00.411318   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:00.411224   55069 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17485-9023/.minikube/machines/newest-cni-398707 ...
	I1024 20:31:00.411387   55046 main.go:141] libmachine: (newest-cni-398707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/newest-cni-398707
	I1024 20:31:00.411422   55046 main.go:141] libmachine: (newest-cni-398707) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube/machines/newest-cni-398707 (perms=drwx------)
	I1024 20:31:00.411443   55046 main.go:141] libmachine: (newest-cni-398707) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube/machines (perms=drwxr-xr-x)
	I1024 20:31:00.411454   55046 main.go:141] libmachine: (newest-cni-398707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube/machines
	I1024 20:31:00.411472   55046 main.go:141] libmachine: (newest-cni-398707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:31:00.411484   55046 main.go:141] libmachine: (newest-cni-398707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17485-9023
	I1024 20:31:00.411501   55046 main.go:141] libmachine: (newest-cni-398707) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023/.minikube (perms=drwxr-xr-x)
	I1024 20:31:00.411517   55046 main.go:141] libmachine: (newest-cni-398707) Setting executable bit set on /home/jenkins/minikube-integration/17485-9023 (perms=drwxrwxr-x)
	I1024 20:31:00.411532   55046 main.go:141] libmachine: (newest-cni-398707) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1024 20:31:00.411547   55046 main.go:141] libmachine: (newest-cni-398707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1024 20:31:00.411564   55046 main.go:141] libmachine: (newest-cni-398707) DBG | Checking permissions on dir: /home/jenkins
	I1024 20:31:00.411579   55046 main.go:141] libmachine: (newest-cni-398707) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1024 20:31:00.411597   55046 main.go:141] libmachine: (newest-cni-398707) Creating domain...
	I1024 20:31:00.411614   55046 main.go:141] libmachine: (newest-cni-398707) DBG | Checking permissions on dir: /home
	I1024 20:31:00.411628   55046 main.go:141] libmachine: (newest-cni-398707) DBG | Skipping /home - not owner
	I1024 20:31:00.412763   55046 main.go:141] libmachine: (newest-cni-398707) define libvirt domain using xml: 
	I1024 20:31:00.412801   55046 main.go:141] libmachine: (newest-cni-398707) <domain type='kvm'>
	I1024 20:31:00.412815   55046 main.go:141] libmachine: (newest-cni-398707)   <name>newest-cni-398707</name>
	I1024 20:31:00.412825   55046 main.go:141] libmachine: (newest-cni-398707)   <memory unit='MiB'>2200</memory>
	I1024 20:31:00.412835   55046 main.go:141] libmachine: (newest-cni-398707)   <vcpu>2</vcpu>
	I1024 20:31:00.412858   55046 main.go:141] libmachine: (newest-cni-398707)   <features>
	I1024 20:31:00.412872   55046 main.go:141] libmachine: (newest-cni-398707)     <acpi/>
	I1024 20:31:00.412880   55046 main.go:141] libmachine: (newest-cni-398707)     <apic/>
	I1024 20:31:00.412886   55046 main.go:141] libmachine: (newest-cni-398707)     <pae/>
	I1024 20:31:00.412896   55046 main.go:141] libmachine: (newest-cni-398707)     
	I1024 20:31:00.412904   55046 main.go:141] libmachine: (newest-cni-398707)   </features>
	I1024 20:31:00.412912   55046 main.go:141] libmachine: (newest-cni-398707)   <cpu mode='host-passthrough'>
	I1024 20:31:00.412924   55046 main.go:141] libmachine: (newest-cni-398707)   
	I1024 20:31:00.412957   55046 main.go:141] libmachine: (newest-cni-398707)   </cpu>
	I1024 20:31:00.412971   55046 main.go:141] libmachine: (newest-cni-398707)   <os>
	I1024 20:31:00.412986   55046 main.go:141] libmachine: (newest-cni-398707)     <type>hvm</type>
	I1024 20:31:00.412998   55046 main.go:141] libmachine: (newest-cni-398707)     <boot dev='cdrom'/>
	I1024 20:31:00.413004   55046 main.go:141] libmachine: (newest-cni-398707)     <boot dev='hd'/>
	I1024 20:31:00.413011   55046 main.go:141] libmachine: (newest-cni-398707)     <bootmenu enable='no'/>
	I1024 20:31:00.413025   55046 main.go:141] libmachine: (newest-cni-398707)   </os>
	I1024 20:31:00.413039   55046 main.go:141] libmachine: (newest-cni-398707)   <devices>
	I1024 20:31:00.413049   55046 main.go:141] libmachine: (newest-cni-398707)     <disk type='file' device='cdrom'>
	I1024 20:31:00.413068   55046 main.go:141] libmachine: (newest-cni-398707)       <source file='/home/jenkins/minikube-integration/17485-9023/.minikube/machines/newest-cni-398707/boot2docker.iso'/>
	I1024 20:31:00.413085   55046 main.go:141] libmachine: (newest-cni-398707)       <target dev='hdc' bus='scsi'/>
	I1024 20:31:00.413098   55046 main.go:141] libmachine: (newest-cni-398707)       <readonly/>
	I1024 20:31:00.413110   55046 main.go:141] libmachine: (newest-cni-398707)     </disk>
	I1024 20:31:00.413148   55046 main.go:141] libmachine: (newest-cni-398707)     <disk type='file' device='disk'>
	I1024 20:31:00.413175   55046 main.go:141] libmachine: (newest-cni-398707)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1024 20:31:00.413196   55046 main.go:141] libmachine: (newest-cni-398707)       <source file='/home/jenkins/minikube-integration/17485-9023/.minikube/machines/newest-cni-398707/newest-cni-398707.rawdisk'/>
	I1024 20:31:00.413210   55046 main.go:141] libmachine: (newest-cni-398707)       <target dev='hda' bus='virtio'/>
	I1024 20:31:00.413223   55046 main.go:141] libmachine: (newest-cni-398707)     </disk>
	I1024 20:31:00.413231   55046 main.go:141] libmachine: (newest-cni-398707)     <interface type='network'>
	I1024 20:31:00.413243   55046 main.go:141] libmachine: (newest-cni-398707)       <source network='mk-newest-cni-398707'/>
	I1024 20:31:00.413257   55046 main.go:141] libmachine: (newest-cni-398707)       <model type='virtio'/>
	I1024 20:31:00.413271   55046 main.go:141] libmachine: (newest-cni-398707)     </interface>
	I1024 20:31:00.413286   55046 main.go:141] libmachine: (newest-cni-398707)     <interface type='network'>
	I1024 20:31:00.413327   55046 main.go:141] libmachine: (newest-cni-398707)       <source network='default'/>
	I1024 20:31:00.413346   55046 main.go:141] libmachine: (newest-cni-398707)       <model type='virtio'/>
	I1024 20:31:00.413360   55046 main.go:141] libmachine: (newest-cni-398707)     </interface>
	I1024 20:31:00.413373   55046 main.go:141] libmachine: (newest-cni-398707)     <serial type='pty'>
	I1024 20:31:00.413388   55046 main.go:141] libmachine: (newest-cni-398707)       <target port='0'/>
	I1024 20:31:00.413406   55046 main.go:141] libmachine: (newest-cni-398707)     </serial>
	I1024 20:31:00.413434   55046 main.go:141] libmachine: (newest-cni-398707)     <console type='pty'>
	I1024 20:31:00.413451   55046 main.go:141] libmachine: (newest-cni-398707)       <target type='serial' port='0'/>
	I1024 20:31:00.413464   55046 main.go:141] libmachine: (newest-cni-398707)     </console>
	I1024 20:31:00.413477   55046 main.go:141] libmachine: (newest-cni-398707)     <rng model='virtio'>
	I1024 20:31:00.413502   55046 main.go:141] libmachine: (newest-cni-398707)       <backend model='random'>/dev/random</backend>
	I1024 20:31:00.413521   55046 main.go:141] libmachine: (newest-cni-398707)     </rng>
	I1024 20:31:00.413542   55046 main.go:141] libmachine: (newest-cni-398707)     
	I1024 20:31:00.413554   55046 main.go:141] libmachine: (newest-cni-398707)     
	I1024 20:31:00.413567   55046 main.go:141] libmachine: (newest-cni-398707)   </devices>
	I1024 20:31:00.413579   55046 main.go:141] libmachine: (newest-cni-398707) </domain>
	I1024 20:31:00.413590   55046 main.go:141] libmachine: (newest-cni-398707) 
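	The domain XML logged above is the definition the KVM driver hands to libvirt for this VM (ISO attached as a CD-ROM, raw disk on virtio, two virtio NICs, serial console, virtio RNG). As a rough illustration of how such a definition can be defined and started programmatically, below is a minimal Go sketch assuming the github.com/libvirt/libvirt-go bindings; the connection URI, domain name, and file paths are placeholders, and this is not minikube's actual driver code.

	package main

	import (
		"fmt"
		"log"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func main() {
		// Illustrative connection URI for the local system hypervisor.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect to libvirt: %v", err)
		}
		defer conn.Close()

		// Placeholder domain definition in the same shape as the XML logged above.
		domainXML := `<domain type='kvm'>
	  <name>example-vm</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	  <devices>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw'/>
	      <source file='/var/lib/example/example-vm.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>`

		// Define the persistent domain from the XML, then start it
		// (the "define libvirt domain using xml" / "Creating domain..." steps above).
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			log.Fatalf("define domain: %v", err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			log.Fatalf("start domain: %v", err)
		}
		fmt.Println("domain defined and started")
	}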
	I1024 20:31:00.418060   55046 main.go:141] libmachine: (newest-cni-398707) DBG | domain newest-cni-398707 has defined MAC address 52:54:00:b7:45:9a in network default
	I1024 20:31:00.418836   55046 main.go:141] libmachine: (newest-cni-398707) Ensuring networks are active...
	I1024 20:31:00.418874   55046 main.go:141] libmachine: (newest-cni-398707) DBG | domain newest-cni-398707 has defined MAC address 52:54:00:80:c3:c6 in network mk-newest-cni-398707
	I1024 20:31:00.419719   55046 main.go:141] libmachine: (newest-cni-398707) Ensuring network default is active
	I1024 20:31:00.420076   55046 main.go:141] libmachine: (newest-cni-398707) Ensuring network mk-newest-cni-398707 is active
	I1024 20:31:00.420743   55046 main.go:141] libmachine: (newest-cni-398707) Getting domain xml...
	I1024 20:31:00.421576   55046 main.go:141] libmachine: (newest-cni-398707) Creating domain...
	I1024 20:31:01.723633   55046 main.go:141] libmachine: (newest-cni-398707) Waiting to get IP...
	I1024 20:31:01.724454   55046 main.go:141] libmachine: (newest-cni-398707) DBG | domain newest-cni-398707 has defined MAC address 52:54:00:80:c3:c6 in network mk-newest-cni-398707
	I1024 20:31:01.724856   55046 main.go:141] libmachine: (newest-cni-398707) DBG | unable to find current IP address of domain newest-cni-398707 in network mk-newest-cni-398707
	I1024 20:31:01.724924   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:01.724852   55069 retry.go:31] will retry after 258.139647ms: waiting for machine to come up
	I1024 20:31:01.984386   55046 main.go:141] libmachine: (newest-cni-398707) DBG | domain newest-cni-398707 has defined MAC address 52:54:00:80:c3:c6 in network mk-newest-cni-398707
	I1024 20:31:01.984958   55046 main.go:141] libmachine: (newest-cni-398707) DBG | unable to find current IP address of domain newest-cni-398707 in network mk-newest-cni-398707
	I1024 20:31:01.984990   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:01.984911   55069 retry.go:31] will retry after 286.768224ms: waiting for machine to come up
	I1024 20:31:02.273656   55046 main.go:141] libmachine: (newest-cni-398707) DBG | domain newest-cni-398707 has defined MAC address 52:54:00:80:c3:c6 in network mk-newest-cni-398707
	I1024 20:31:02.274221   55046 main.go:141] libmachine: (newest-cni-398707) DBG | unable to find current IP address of domain newest-cni-398707 in network mk-newest-cni-398707
	I1024 20:31:02.274247   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:02.274171   55069 retry.go:31] will retry after 365.077814ms: waiting for machine to come up
	I1024 20:31:02.640835   55046 main.go:141] libmachine: (newest-cni-398707) DBG | domain newest-cni-398707 has defined MAC address 52:54:00:80:c3:c6 in network mk-newest-cni-398707
	I1024 20:31:02.641352   55046 main.go:141] libmachine: (newest-cni-398707) DBG | unable to find current IP address of domain newest-cni-398707 in network mk-newest-cni-398707
	I1024 20:31:02.641382   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:02.641289   55069 retry.go:31] will retry after 457.45594ms: waiting for machine to come up
	I1024 20:31:03.099892   55046 main.go:141] libmachine: (newest-cni-398707) DBG | domain newest-cni-398707 has defined MAC address 52:54:00:80:c3:c6 in network mk-newest-cni-398707
	I1024 20:31:03.100365   55046 main.go:141] libmachine: (newest-cni-398707) DBG | unable to find current IP address of domain newest-cni-398707 in network mk-newest-cni-398707
	I1024 20:31:03.100387   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:03.100317   55069 retry.go:31] will retry after 493.281889ms: waiting for machine to come up
	I1024 20:31:03.594787   55046 main.go:141] libmachine: (newest-cni-398707) DBG | domain newest-cni-398707 has defined MAC address 52:54:00:80:c3:c6 in network mk-newest-cni-398707
	I1024 20:31:03.595222   55046 main.go:141] libmachine: (newest-cni-398707) DBG | unable to find current IP address of domain newest-cni-398707 in network mk-newest-cni-398707
	I1024 20:31:03.595255   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:03.595188   55069 retry.go:31] will retry after 903.823955ms: waiting for machine to come up
	I1024 20:31:04.500122   55046 main.go:141] libmachine: (newest-cni-398707) DBG | domain newest-cni-398707 has defined MAC address 52:54:00:80:c3:c6 in network mk-newest-cni-398707
	I1024 20:31:04.500597   55046 main.go:141] libmachine: (newest-cni-398707) DBG | unable to find current IP address of domain newest-cni-398707 in network mk-newest-cni-398707
	I1024 20:31:04.500643   55046 main.go:141] libmachine: (newest-cni-398707) DBG | I1024 20:31:04.500546   55069 retry.go:31] will retry after 721.835496ms: waiting for machine to come up
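	The "will retry after ...: waiting for machine to come up" lines above show the driver polling for the guest's DHCP lease with a growing, jittered delay between attempts. The Go sketch below illustrates that retry-with-backoff pattern; the helper name and backoff constants are made up for the example and are not taken from minikube's retry package.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
	// sleeping for a randomized, growing interval between tries.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Grow the wait roughly geometrically and add jitter, similar in spirit
			// to the 258ms, 286ms, 365ms, ... intervals seen in the log above.
			wait := base*time.Duration(1<<uint(i)) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		start := time.Now()
		err := retryWithBackoff(6, 250*time.Millisecond, func() error {
			if time.Since(start) < time.Second {
				return errors.New("waiting for machine to come up")
			}
			return nil // pretend the IP finally showed up in the DHCP leases
		})
		if err != nil {
			fmt.Println("gave up:", err)
			return
		}
		fmt.Println("machine is up")
	}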
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 20:12:24 UTC, ends at Tue 2023-10-24 20:31:08 UTC. --
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.608182091Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:143351ce77884696e7e47359b3f8d32520306badd38d49ff39d3b85c3156e448,Metadata:&PodSandboxMetadata{Name:busybox,Uid:7a8e5c07-7077-4947-8c31-f3c6da4d5e92,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178400631194992,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a8e5c07-7077-4947-8c31-f3c6da4d5e92,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T20:13:12.574270434Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e375bca1f8d8acb45a90a1162cb2fef24b01a4b3691efa5b679e15f93d46860b,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-gnn8j,Uid:f8f83c43-bf4a-452f-96c3-e968aa6cfd8b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:16981784005261629
89,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-gnn8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8f83c43-bf4a-452f-96c3-e968aa6cfd8b,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T20:13:12.574271573Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de2f215587b00a9fe283f547ebfabd5418b0c887d096d0a8fc35f99cabba211e,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-tsfvs,Uid:f601af0f-443c-445c-8198-259cf9015272,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178396629485602,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-tsfvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f601af0f-443c-445c-8198-259cf9015272,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T20:13:12.5
74268003Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1764bdf6a043248d5ce7ad539e44f5bea288797d8097ec2cd882205a5ee75b5d,Metadata:&PodSandboxMetadata{Name:kube-proxy-hvphg,Uid:9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178394425743342,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hvphg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-24T20:13:12.574272662Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:323512c1-2555-419c-b128-47b945f9d24d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178394422271553,Labels:map[string]s
tring{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/
config.seen: 2023-10-24T20:13:12.574269343Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b9b47333434fd97edc6ea8efccbfe6d4bad9faaef3b838f55b395ffd002f65c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-014826,Uid:785df71b0f57821e3cd5d04047439a03,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178386101861042,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785df71b0f57821e3cd5d04047439a03,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 785df71b0f57821e3cd5d04047439a03,kubernetes.io/config.seen: 2023-10-24T20:13:05.555483105Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0e2578156817835bf70037d370b98a02feecd82b19de06f4c024e62cb73d26b1,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-014826,Uid:d1cdb7ecf2d6a0a78bf6c144de83
9e50,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178386093746500,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cdb7ecf2d6a0a78bf6c144de839e50,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.162:2379,kubernetes.io/config.hash: d1cdb7ecf2d6a0a78bf6c144de839e50,kubernetes.io/config.seen: 2023-10-24T20:13:05.555484757Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d059d8d893a6b3a05e86a9bd6721c6846745b4781ed76b8a5480d854c034ba81,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-014826,Uid:297ea18ade8c720921f2e314b05678b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178386084432024,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-014826,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297ea18ade8c720921f2e314b05678b3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 297ea18ade8c720921f2e314b05678b3,kubernetes.io/config.seen: 2023-10-24T20:13:05.555483991Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c64448b4c09a0ac1b4df0cf41d913023a90f99a0670b03507254a0abbf03e7e3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-014826,Uid:cc0b06526c504aeef792396e56b6c264,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1698178386042822260,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0b06526c504aeef792396e56b6c264,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.162:8443,kubernetes.io/config.hash: cc0b06526c504aeef792396e56b6c264,kub
ernetes.io/config.seen: 2023-10-24T20:13:05.555479068Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=578818a6-aaa7-42b4-9eab-cee33a04d975 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.608916536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=76c20dfa-3d16-4596-bfe1-6706fa48178f name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.608987810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=76c20dfa-3d16-4596-bfe1-6706fa48178f name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.609170035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698178425843465625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:615a725b971e1534d6675b4ce3c2bfbcf12b2ead175113f6e62bd71b3c80fb51,PodSandboxId:143351ce77884696e7e47359b3f8d32520306badd38d49ff39d3b85c3156e448,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178402484252772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a8e5c07-7077-4947-8c31-f3c6da4d5e92,},Annotations:map[string]string{io.kubernetes.container.hash: a91ab45d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8,PodSandboxId:e375bca1f8d8acb45a90a1162cb2fef24b01a4b3691efa5b679e15f93d46860b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698178401328129782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gnn8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8f83c43-bf4a-452f-96c3-e968aa6cfd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8f1249,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1698178395002860402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c,PodSandboxId:1764bdf6a043248d5ce7ad539e44f5bea288797d8097ec2cd882205a5ee75b5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698178394979211527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hvphg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9c3c47-456b-4a
a9-bf59-882cc3d2f3f7,},Annotations:map[string]string{io.kubernetes.container.hash: 84ae6965,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202,PodSandboxId:d059d8d893a6b3a05e86a9bd6721c6846745b4781ed76b8a5480d854c034ba81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698178387279558750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297ea18ade8c720921f2e31
4b05678b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b,PodSandboxId:0e2578156817835bf70037d370b98a02feecd82b19de06f4c024e62cb73d26b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698178387210413493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cdb7ecf2d6a0a78bf6c144de839e50,},Annotations:map[string]string{io.kubern
etes.container.hash: aa346f6c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33,PodSandboxId:2b9b47333434fd97edc6ea8efccbfe6d4bad9faaef3b838f55b395ffd002f65c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698178386860489332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785df71b0f57821e3cd5d04047439a03,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32,PodSandboxId:c64448b4c09a0ac1b4df0cf41d913023a90f99a0670b03507254a0abbf03e7e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698178386511844069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0b06526c504aeef792396e56b6c264,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 69ac14d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=76c20dfa-3d16-4596-bfe1-6706fa48178f name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.609989441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ace6ae6a-dff2-4e2f-a64f-c416ae199646 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.610060423Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ace6ae6a-dff2-4e2f-a64f-c416ae199646 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.617188139Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=998c42df-3d1b-46bd-9cbd-a50bddd86cde name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.617508813Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179468617497479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=998c42df-3d1b-46bd-9cbd-a50bddd86cde name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.618181494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=485978aa-1839-491d-aa0f-107920efd42a name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.618320095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=485978aa-1839-491d-aa0f-107920efd42a name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.618835612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698178425843465625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:615a725b971e1534d6675b4ce3c2bfbcf12b2ead175113f6e62bd71b3c80fb51,PodSandboxId:143351ce77884696e7e47359b3f8d32520306badd38d49ff39d3b85c3156e448,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178402484252772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a8e5c07-7077-4947-8c31-f3c6da4d5e92,},Annotations:map[string]string{io.kubernetes.container.hash: a91ab45d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8,PodSandboxId:e375bca1f8d8acb45a90a1162cb2fef24b01a4b3691efa5b679e15f93d46860b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698178401328129782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gnn8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8f83c43-bf4a-452f-96c3-e968aa6cfd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8f1249,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1698178395002860402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c,PodSandboxId:1764bdf6a043248d5ce7ad539e44f5bea288797d8097ec2cd882205a5ee75b5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698178394979211527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hvphg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9c3c47-456b-4a
a9-bf59-882cc3d2f3f7,},Annotations:map[string]string{io.kubernetes.container.hash: 84ae6965,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202,PodSandboxId:d059d8d893a6b3a05e86a9bd6721c6846745b4781ed76b8a5480d854c034ba81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698178387279558750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297ea18ade8c720921f2e31
4b05678b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b,PodSandboxId:0e2578156817835bf70037d370b98a02feecd82b19de06f4c024e62cb73d26b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698178387210413493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cdb7ecf2d6a0a78bf6c144de839e50,},Annotations:map[string]string{io.kubern
etes.container.hash: aa346f6c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33,PodSandboxId:2b9b47333434fd97edc6ea8efccbfe6d4bad9faaef3b838f55b395ffd002f65c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698178386860489332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785df71b0f57821e3cd5d04047439a03,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32,PodSandboxId:c64448b4c09a0ac1b4df0cf41d913023a90f99a0670b03507254a0abbf03e7e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698178386511844069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0b06526c504aeef792396e56b6c264,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 69ac14d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=485978aa-1839-491d-aa0f-107920efd42a name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.665074252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c4c2d0d6-0372-45ff-b7da-0d58a49115a1 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.665158779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c4c2d0d6-0372-45ff-b7da-0d58a49115a1 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.667185519Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bd8b4f1d-fa4f-4f1d-b2bd-0030fd17f43d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.667526833Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179468667505399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=bd8b4f1d-fa4f-4f1d-b2bd-0030fd17f43d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.668380938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=494b1cf2-c316-41e8-ad25-ebe1d1df6699 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.668458115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=494b1cf2-c316-41e8-ad25-ebe1d1df6699 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.668796682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698178425843465625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:615a725b971e1534d6675b4ce3c2bfbcf12b2ead175113f6e62bd71b3c80fb51,PodSandboxId:143351ce77884696e7e47359b3f8d32520306badd38d49ff39d3b85c3156e448,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178402484252772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a8e5c07-7077-4947-8c31-f3c6da4d5e92,},Annotations:map[string]string{io.kubernetes.container.hash: a91ab45d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8,PodSandboxId:e375bca1f8d8acb45a90a1162cb2fef24b01a4b3691efa5b679e15f93d46860b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698178401328129782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gnn8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8f83c43-bf4a-452f-96c3-e968aa6cfd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8f1249,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1698178395002860402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c,PodSandboxId:1764bdf6a043248d5ce7ad539e44f5bea288797d8097ec2cd882205a5ee75b5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698178394979211527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hvphg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9c3c47-456b-4a
a9-bf59-882cc3d2f3f7,},Annotations:map[string]string{io.kubernetes.container.hash: 84ae6965,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202,PodSandboxId:d059d8d893a6b3a05e86a9bd6721c6846745b4781ed76b8a5480d854c034ba81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698178387279558750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297ea18ade8c720921f2e31
4b05678b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b,PodSandboxId:0e2578156817835bf70037d370b98a02feecd82b19de06f4c024e62cb73d26b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698178387210413493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cdb7ecf2d6a0a78bf6c144de839e50,},Annotations:map[string]string{io.kubern
etes.container.hash: aa346f6c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33,PodSandboxId:2b9b47333434fd97edc6ea8efccbfe6d4bad9faaef3b838f55b395ffd002f65c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698178386860489332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785df71b0f57821e3cd5d04047439a03,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32,PodSandboxId:c64448b4c09a0ac1b4df0cf41d913023a90f99a0670b03507254a0abbf03e7e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698178386511844069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0b06526c504aeef792396e56b6c264,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 69ac14d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=494b1cf2-c316-41e8-ad25-ebe1d1df6699 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.704565152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=347a66c1-bee1-4e4f-b9a0-bb95a2923929 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.704789642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=347a66c1-bee1-4e4f-b9a0-bb95a2923929 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.708827990Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ba408cb7-590e-4da8-a056-f077cfc5939c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.709251480Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179468709231635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=ba408cb7-590e-4da8-a056-f077cfc5939c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.710193832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3746528f-d989-4c29-bcc4-8d10a36ee355 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.710244276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3746528f-d989-4c29-bcc4-8d10a36ee355 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:31:08 no-preload-014826 crio[709]: time="2023-10-24 20:31:08.710421020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1698178425843465625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:615a725b971e1534d6675b4ce3c2bfbcf12b2ead175113f6e62bd71b3c80fb51,PodSandboxId:143351ce77884696e7e47359b3f8d32520306badd38d49ff39d3b85c3156e448,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1698178402484252772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a8e5c07-7077-4947-8c31-f3c6da4d5e92,},Annotations:map[string]string{io.kubernetes.container.hash: a91ab45d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8,PodSandboxId:e375bca1f8d8acb45a90a1162cb2fef24b01a4b3691efa5b679e15f93d46860b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1698178401328129782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gnn8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8f83c43-bf4a-452f-96c3-e968aa6cfd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8f1249,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1,PodSandboxId:8f828f4fe169deab811f0ae1a165bf13599341a697ac653a11f5a5026ef5eeaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1698178395002860402,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 323512c1-2555-419c-b128-47b945f9d24d,},Annotations:map[string]string{io.kubernetes.container.hash: 2948eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c,PodSandboxId:1764bdf6a043248d5ce7ad539e44f5bea288797d8097ec2cd882205a5ee75b5d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1698178394979211527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hvphg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a9c3c47-456b-4a
a9-bf59-882cc3d2f3f7,},Annotations:map[string]string{io.kubernetes.container.hash: 84ae6965,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202,PodSandboxId:d059d8d893a6b3a05e86a9bd6721c6846745b4781ed76b8a5480d854c034ba81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1698178387279558750,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297ea18ade8c720921f2e31
4b05678b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b,PodSandboxId:0e2578156817835bf70037d370b98a02feecd82b19de06f4c024e62cb73d26b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1698178387210413493,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cdb7ecf2d6a0a78bf6c144de839e50,},Annotations:map[string]string{io.kubern
etes.container.hash: aa346f6c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33,PodSandboxId:2b9b47333434fd97edc6ea8efccbfe6d4bad9faaef3b838f55b395ffd002f65c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1698178386860489332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785df71b0f57821e3cd5d04047439a03,},Annotations:ma
p[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32,PodSandboxId:c64448b4c09a0ac1b4df0cf41d913023a90f99a0670b03507254a0abbf03e7e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1698178386511844069,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-014826,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0b06526c504aeef792396e56b6c264,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 69ac14d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3746528f-d989-4c29-bcc4-8d10a36ee355 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6d89cb6110d0a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Running             storage-provisioner       2                   8f828f4fe169d       storage-provisioner
	615a725b971e1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   17 minutes ago      Running             busybox                   1                   143351ce77884       busybox
	94df20bf68998       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      17 minutes ago      Running             coredns                   1                   e375bca1f8d8a       coredns-5dd5756b68-gnn8j
	7e817e194cdec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Exited              storage-provisioner       1                   8f828f4fe169d       storage-provisioner
	bc751572f7c36       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      17 minutes ago      Running             kube-proxy                1                   1764bdf6a0432       kube-proxy-hvphg
	458ce37f1738a       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      18 minutes ago      Running             kube-scheduler            1                   d059d8d893a6b       kube-scheduler-no-preload-014826
	cb13ad95dea1a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      18 minutes ago      Running             etcd                      1                   0e25781568178       etcd-no-preload-014826
	153d53cd79d89       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      18 minutes ago      Running             kube-controller-manager   1                   2b9b47333434f       kube-controller-manager-no-preload-014826
	c440cb516cdfb       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      18 minutes ago      Running             kube-apiserver            1                   c64448b4c09a0       kube-apiserver-no-preload-014826
	
	* 
	* ==> coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51339 - 58575 "HINFO IN 969512186226067403.7834173540402370385. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.007987292s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-014826
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-014826
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=no-preload-014826
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T20_02_50_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 20:02:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-014826
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 20:31:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 20:29:02 +0000   Tue, 24 Oct 2023 20:02:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 20:29:02 +0000   Tue, 24 Oct 2023 20:02:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 20:29:02 +0000   Tue, 24 Oct 2023 20:02:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 20:29:02 +0000   Tue, 24 Oct 2023 20:13:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.162
	  Hostname:    no-preload-014826
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a313d69995c3482a9dba11eb665ee614
	  System UUID:                a313d699-95c3-482a-9dba-11eb665ee614
	  Boot ID:                    f6c96220-fb67-4529-bb83-eeb630a3972c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5dd5756b68-gnn8j                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-014826                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-014826             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-014826    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-hvphg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-014826             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-tsfvs              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-014826 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-014826 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-014826 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-014826 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-014826 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-014826 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-014826 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-014826 event: Registered Node no-preload-014826 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-014826 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-014826 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node no-preload-014826 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node no-preload-014826 event: Registered Node no-preload-014826 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct24 20:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069873] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.942386] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.660930] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.144777] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.614614] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.570430] systemd-fstab-generator[633]: Ignoring "noauto" for root device
	[  +0.125801] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.151362] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.123736] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.233389] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[Oct24 20:13] systemd-fstab-generator[1268]: Ignoring "noauto" for root device
	[ +15.344562] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] <==
	* {"level":"info","ts":"2023-10-24T20:13:09.365174Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-24T20:13:09.365365Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"25a84ac227828bb5","initial-advertise-peer-urls":["https://192.168.50.162:2380"],"listen-peer-urls":["https://192.168.50.162:2380"],"advertise-client-urls":["https://192.168.50.162:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.162:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-24T20:13:09.365415Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T20:13:09.365542Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.162:2380"}
	{"level":"info","ts":"2023-10-24T20:13:09.365566Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.162:2380"}
	{"level":"info","ts":"2023-10-24T20:13:10.980448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25a84ac227828bb5 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-24T20:13:10.98061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25a84ac227828bb5 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-24T20:13:10.980766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25a84ac227828bb5 received MsgPreVoteResp from 25a84ac227828bb5 at term 2"}
	{"level":"info","ts":"2023-10-24T20:13:10.980807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25a84ac227828bb5 became candidate at term 3"}
	{"level":"info","ts":"2023-10-24T20:13:10.980831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25a84ac227828bb5 received MsgVoteResp from 25a84ac227828bb5 at term 3"}
	{"level":"info","ts":"2023-10-24T20:13:10.980858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25a84ac227828bb5 became leader at term 3"}
	{"level":"info","ts":"2023-10-24T20:13:10.980884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 25a84ac227828bb5 elected leader 25a84ac227828bb5 at term 3"}
	{"level":"info","ts":"2023-10-24T20:13:10.983459Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"25a84ac227828bb5","local-member-attributes":"{Name:no-preload-014826 ClientURLs:[https://192.168.50.162:2379]}","request-path":"/0/members/25a84ac227828bb5/attributes","cluster-id":"2de4c6e2c9b44383","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T20:13:10.983473Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T20:13:10.983795Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T20:13:10.983836Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T20:13:10.983503Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T20:13:10.985084Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T20:13:10.98579Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.162:2379"}
	{"level":"info","ts":"2023-10-24T20:23:11.017524Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":871}
	{"level":"info","ts":"2023-10-24T20:23:11.020739Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":871,"took":"2.847172ms","hash":2526692423}
	{"level":"info","ts":"2023-10-24T20:23:11.020802Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2526692423,"revision":871,"compact-revision":-1}
	{"level":"info","ts":"2023-10-24T20:28:11.026102Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1113}
	{"level":"info","ts":"2023-10-24T20:28:11.027774Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1113,"took":"1.3437ms","hash":1356215658}
	{"level":"info","ts":"2023-10-24T20:28:11.027833Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1356215658,"revision":1113,"compact-revision":871}
	
	* 
	* ==> kernel <==
	*  20:31:09 up 18 min,  0 users,  load average: 0.21, 0.25, 0.18
	Linux no-preload-014826 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] <==
	* E1024 20:26:13.614405       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:26:13.615316       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:27:12.444861       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 20:28:12.445456       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:28:12.616139       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:28:12.616283       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:28:12.616951       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:28:13.617528       1 handler_proxy.go:93] no RequestInfo found in the context
	W1024 20:28:13.617600       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:28:13.617777       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:28:13.617813       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1024 20:28:13.617923       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:28:13.619269       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:29:12.445489       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1024 20:29:13.618039       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:29:13.618155       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1024 20:29:13.618187       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1024 20:29:13.620308       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 20:29:13.620386       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:29:13.620407       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:30:12.445136       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] <==
	* I1024 20:25:25.640718       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:25:55.129206       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:25:55.650320       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:26:25.135999       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:26:25.659319       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:26:55.142024       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:26:55.667482       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:27:25.149381       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:27:25.677118       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:27:55.156460       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:27:55.687435       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:28:25.163962       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:28:25.696495       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:28:55.170037       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:28:55.706498       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1024 20:29:17.624852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="334.322µs"
	E1024 20:29:25.176861       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:29:25.715262       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1024 20:29:29.621500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="119.453µs"
	E1024 20:29:55.183852       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:29:55.725158       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:30:25.190799       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:30:25.734536       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1024 20:30:55.197163       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1024 20:30:55.747818       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] <==
	* I1024 20:13:15.212378       1 server_others.go:69] "Using iptables proxy"
	I1024 20:13:15.223122       1 node.go:141] Successfully retrieved node IP: 192.168.50.162
	I1024 20:13:15.265823       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1024 20:13:15.265882       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1024 20:13:15.269409       1 server_others.go:152] "Using iptables Proxier"
	I1024 20:13:15.269489       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 20:13:15.269925       1 server.go:846] "Version info" version="v1.28.3"
	I1024 20:13:15.269977       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:13:15.271151       1 config.go:188] "Starting service config controller"
	I1024 20:13:15.271211       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 20:13:15.271240       1 config.go:97] "Starting endpoint slice config controller"
	I1024 20:13:15.271246       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 20:13:15.271977       1 config.go:315] "Starting node config controller"
	I1024 20:13:15.272031       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 20:13:15.371403       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 20:13:15.371523       1 shared_informer.go:318] Caches are synced for service config
	I1024 20:13:15.372122       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] <==
	* I1024 20:13:09.715128       1 serving.go:348] Generated self-signed cert in-memory
	W1024 20:13:12.555392       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 20:13:12.555519       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 20:13:12.555533       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 20:13:12.555541       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 20:13:12.611712       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 20:13:12.611756       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:13:12.614485       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 20:13:12.614543       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 20:13:12.617356       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 20:13:12.617507       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 20:13:12.715058       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 20:12:24 UTC, ends at Tue 2023-10-24 20:31:09 UTC. --
	Oct 24 20:29:03 no-preload-014826 kubelet[1274]: E1024 20:29:03.618862    1274 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 24 20:29:03 no-preload-014826 kubelet[1274]: E1024 20:29:03.618937    1274 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 24 20:29:03 no-preload-014826 kubelet[1274]: E1024 20:29:03.619147    1274 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lmrsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-tsfvs_kube-system(f601af0f-443c-445c-8198-259cf9015272): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 24 20:29:03 no-preload-014826 kubelet[1274]: E1024 20:29:03.619194    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:29:05 no-preload-014826 kubelet[1274]: E1024 20:29:05.628205    1274 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:29:05 no-preload-014826 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:29:05 no-preload-014826 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:29:05 no-preload-014826 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:29:17 no-preload-014826 kubelet[1274]: E1024 20:29:17.604501    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:29:29 no-preload-014826 kubelet[1274]: E1024 20:29:29.603515    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:29:42 no-preload-014826 kubelet[1274]: E1024 20:29:42.603068    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:29:57 no-preload-014826 kubelet[1274]: E1024 20:29:57.603352    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:30:05 no-preload-014826 kubelet[1274]: E1024 20:30:05.630777    1274 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:30:05 no-preload-014826 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:30:05 no-preload-014826 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:30:05 no-preload-014826 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 24 20:30:08 no-preload-014826 kubelet[1274]: E1024 20:30:08.603929    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:30:22 no-preload-014826 kubelet[1274]: E1024 20:30:22.602983    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:30:35 no-preload-014826 kubelet[1274]: E1024 20:30:35.604605    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:30:50 no-preload-014826 kubelet[1274]: E1024 20:30:50.603228    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:31:05 no-preload-014826 kubelet[1274]: E1024 20:31:05.603831    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tsfvs" podUID="f601af0f-443c-445c-8198-259cf9015272"
	Oct 24 20:31:05 no-preload-014826 kubelet[1274]: E1024 20:31:05.628522    1274 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 24 20:31:05 no-preload-014826 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 24 20:31:05 no-preload-014826 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 24 20:31:05 no-preload-014826 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] <==
	* I1024 20:13:45.981835       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 20:13:46.002048       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 20:13:46.002129       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 20:14:03.408423       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 20:14:03.408898       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-014826_54aa25c2-eba0-4c08-953b-3098a3702b2c!
	I1024 20:14:03.413355       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"02c1f45f-0c51-43a7-ac75-c7a0932ce4e8", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-014826_54aa25c2-eba0-4c08-953b-3098a3702b2c became leader
	I1024 20:14:03.512020       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-014826_54aa25c2-eba0-4c08-953b-3098a3702b2c!
	
	* 
	* ==> storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] <==
	* I1024 20:13:15.180950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1024 20:13:45.184772       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-014826 -n no-preload-014826
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-014826 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-tsfvs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-014826 describe pod metrics-server-57f55c9bc5-tsfvs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-014826 describe pod metrics-server-57f55c9bc5-tsfvs: exit status 1 (83.56165ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-tsfvs" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-014826 describe pod metrics-server-57f55c9bc5-tsfvs: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (244.23s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (153.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467375 -n old-k8s-version-467375
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-24 20:30:56.017744646 +0000 UTC m=+5417.783223645
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-467375 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-467375 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.481µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-467375 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467375 -n old-k8s-version-467375
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-467375 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-467375 logs -n 25: (1.570822348s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-636215                                        | pause-636215                 | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:01 UTC |
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-145190                              | stopped-upgrade-145190       | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:01 UTC |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:02 UTC | 24 Oct 23 20:03 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-051222                              | cert-expiration-051222       | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-087071 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | disable-driver-mounts-087071                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:05 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-014826             | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-867165            | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC | 24 Oct 23 20:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-643126  | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC | 24 Oct 23 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:05 UTC |                     |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-014826                  | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-014826                                   | no-preload-014826            | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-867165                 | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-867165                                  | embed-certs-867165           | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467375        | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:06 UTC | 24 Oct 23 20:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-643126       | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-643126 | jenkins | v1.31.2 | 24 Oct 23 20:08 UTC | 24 Oct 23 20:16 UTC |
	|         | default-k8s-diff-port-643126                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467375             | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467375                              | old-k8s-version-467375       | jenkins | v1.31.2 | 24 Oct 23 20:09 UTC | 24 Oct 23 20:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 20:09:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 20:09:32.850310   50077 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:09:32.850450   50077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:09:32.850462   50077 out.go:309] Setting ErrFile to fd 2...
	I1024 20:09:32.850470   50077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:09:32.850632   50077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 20:09:32.851152   50077 out.go:303] Setting JSON to false
	I1024 20:09:32.851985   50077 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6471,"bootTime":1698171702,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 20:09:32.852046   50077 start.go:138] virtualization: kvm guest
	I1024 20:09:32.854420   50077 out.go:177] * [old-k8s-version-467375] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 20:09:32.855945   50077 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:09:32.855955   50077 notify.go:220] Checking for updates...
	I1024 20:09:32.857502   50077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:09:32.858984   50077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:09:32.860444   50077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 20:09:32.861833   50077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 20:09:32.863229   50077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:09:32.864917   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:09:32.865284   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:09:32.865345   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:09:32.879470   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I1024 20:09:32.879865   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:09:32.880332   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:09:32.880355   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:09:32.880731   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:09:32.880894   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:09:32.882647   50077 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 20:09:32.884050   50077 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:09:32.884316   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:09:32.884351   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:09:32.897671   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38215
	I1024 20:09:32.898054   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:09:32.898495   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:09:32.898521   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:09:32.898837   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:09:32.899002   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:09:32.933365   50077 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 20:09:32.934993   50077 start.go:298] selected driver: kvm2
	I1024 20:09:32.935008   50077 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:09:32.935100   50077 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:09:32.935713   50077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:09:32.935789   50077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 20:09:32.949274   50077 install.go:137] /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1024 20:09:32.949613   50077 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 20:09:32.949670   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:09:32.949682   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:09:32.949693   50077 start_flags.go:323] config:
	{Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:09:32.949823   50077 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:09:32.951734   50077 out.go:177] * Starting control plane node old-k8s-version-467375 in cluster old-k8s-version-467375
	I1024 20:09:31.289529   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:32.953102   50077 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 20:09:32.953131   50077 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1024 20:09:32.953140   50077 cache.go:57] Caching tarball of preloaded images
	I1024 20:09:32.953220   50077 preload.go:174] Found /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 20:09:32.953230   50077 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1024 20:09:32.953361   50077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:09:32.953531   50077 start.go:365] acquiring machines lock for old-k8s-version-467375: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:09:37.369555   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:40.441571   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:46.521544   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:49.593529   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:55.673497   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:09:58.745605   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:04.825563   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:07.897530   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:13.977541   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:17.049658   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:23.129561   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:26.201528   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:32.281583   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:35.353592   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:41.433570   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:44.505586   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:50.585514   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:53.657506   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:10:59.737620   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:11:02.809631   49071 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.162:22: connect: no route to host
	I1024 20:11:05.812536   49198 start.go:369] acquired machines lock for "embed-certs-867165" in 4m26.940203259s
	I1024 20:11:05.812584   49198 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:05.812594   49198 fix.go:54] fixHost starting: 
	I1024 20:11:05.812911   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:05.812959   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:05.827853   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I1024 20:11:05.828400   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:05.828896   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:05.828922   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:05.829237   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:05.829432   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:05.829588   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:05.831229   49198 fix.go:102] recreateIfNeeded on embed-certs-867165: state=Stopped err=<nil>
	I1024 20:11:05.831249   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	W1024 20:11:05.831407   49198 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:05.833007   49198 out.go:177] * Restarting existing kvm2 VM for "embed-certs-867165" ...
	I1024 20:11:05.810496   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:05.810546   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:11:05.812388   49071 machine.go:91] provisioned docker machine in 4m37.419019216s
	I1024 20:11:05.812422   49071 fix.go:56] fixHost completed within 4m37.4383256s
	I1024 20:11:05.812427   49071 start.go:83] releasing machines lock for "no-preload-014826", held for 4m37.438344867s
	W1024 20:11:05.812453   49071 start.go:691] error starting host: provision: host is not running
	W1024 20:11:05.812538   49071 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1024 20:11:05.812551   49071 start.go:706] Will try again in 5 seconds ...
	I1024 20:11:05.834235   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Start
	I1024 20:11:05.834397   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring networks are active...
	I1024 20:11:05.835212   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring network default is active
	I1024 20:11:05.835540   49198 main.go:141] libmachine: (embed-certs-867165) Ensuring network mk-embed-certs-867165 is active
	I1024 20:11:05.835850   49198 main.go:141] libmachine: (embed-certs-867165) Getting domain xml...
	I1024 20:11:05.836556   49198 main.go:141] libmachine: (embed-certs-867165) Creating domain...
	I1024 20:11:07.054253   49198 main.go:141] libmachine: (embed-certs-867165) Waiting to get IP...
	I1024 20:11:07.055379   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.055819   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.055911   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.055829   50328 retry.go:31] will retry after 212.147571ms: waiting for machine to come up
	I1024 20:11:07.269505   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.269953   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.270002   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.269942   50328 retry.go:31] will retry after 308.705783ms: waiting for machine to come up
	I1024 20:11:07.580602   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:07.581075   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:07.581103   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:07.581041   50328 retry.go:31] will retry after 467.682838ms: waiting for machine to come up
	I1024 20:11:08.050725   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:08.051121   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:08.051154   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:08.051070   50328 retry.go:31] will retry after 399.648518ms: waiting for machine to come up
	I1024 20:11:08.452605   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:08.452968   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:08.452999   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:08.452906   50328 retry.go:31] will retry after 617.165915ms: waiting for machine to come up
	I1024 20:11:10.812763   49071 start.go:365] acquiring machines lock for no-preload-014826: {Name:mk95e83528d9579bfddae7d01a593afd82747411 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 20:11:09.071803   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:09.072236   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:09.072268   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:09.072205   50328 retry.go:31] will retry after 678.895198ms: waiting for machine to come up
	I1024 20:11:09.753179   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:09.753658   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:09.753689   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:09.753600   50328 retry.go:31] will retry after 807.254598ms: waiting for machine to come up
	I1024 20:11:10.562345   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:10.562733   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:10.562761   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:10.562688   50328 retry.go:31] will retry after 921.950476ms: waiting for machine to come up
	I1024 20:11:11.485981   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:11.486498   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:11.486524   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:11.486452   50328 retry.go:31] will retry after 1.56679652s: waiting for machine to come up
	I1024 20:11:13.055209   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:13.055638   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:13.055664   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:13.055594   50328 retry.go:31] will retry after 2.296157501s: waiting for machine to come up
	I1024 20:11:15.355156   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:15.355522   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:15.355555   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:15.355460   50328 retry.go:31] will retry after 1.913484523s: waiting for machine to come up
	I1024 20:11:17.270771   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:17.271200   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:17.271237   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:17.271154   50328 retry.go:31] will retry after 2.867410465s: waiting for machine to come up
	I1024 20:11:20.142209   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:20.142651   49198 main.go:141] libmachine: (embed-certs-867165) DBG | unable to find current IP address of domain embed-certs-867165 in network mk-embed-certs-867165
	I1024 20:11:20.142675   49198 main.go:141] libmachine: (embed-certs-867165) DBG | I1024 20:11:20.142603   50328 retry.go:31] will retry after 4.193720328s: waiting for machine to come up
	I1024 20:11:25.925856   49708 start.go:369] acquired machines lock for "default-k8s-diff-port-643126" in 3m22.313323811s
	I1024 20:11:25.925904   49708 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:25.925911   49708 fix.go:54] fixHost starting: 
	I1024 20:11:25.926296   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:25.926331   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:25.942871   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I1024 20:11:25.943321   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:25.943866   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:11:25.943890   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:25.944187   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:25.944359   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:25.944510   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:11:25.945833   49708 fix.go:102] recreateIfNeeded on default-k8s-diff-port-643126: state=Stopped err=<nil>
	I1024 20:11:25.945875   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	W1024 20:11:25.946039   49708 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:25.949057   49708 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-643126" ...
	I1024 20:11:24.340353   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.340876   49198 main.go:141] libmachine: (embed-certs-867165) Found IP for machine: 192.168.72.10
	I1024 20:11:24.340899   49198 main.go:141] libmachine: (embed-certs-867165) Reserving static IP address...
	I1024 20:11:24.340912   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has current primary IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.341389   49198 main.go:141] libmachine: (embed-certs-867165) Reserved static IP address: 192.168.72.10
	I1024 20:11:24.341430   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "embed-certs-867165", mac: "52:54:00:59:66:c6", ip: "192.168.72.10"} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.341453   49198 main.go:141] libmachine: (embed-certs-867165) Waiting for SSH to be available...
	I1024 20:11:24.341482   49198 main.go:141] libmachine: (embed-certs-867165) DBG | skip adding static IP to network mk-embed-certs-867165 - found existing host DHCP lease matching {name: "embed-certs-867165", mac: "52:54:00:59:66:c6", ip: "192.168.72.10"}
	I1024 20:11:24.341500   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Getting to WaitForSSH function...
	I1024 20:11:24.343707   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.344021   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.344050   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.344202   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Using SSH client type: external
	I1024 20:11:24.344229   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa (-rw-------)
	I1024 20:11:24.344263   49198 main.go:141] libmachine: (embed-certs-867165) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:11:24.344279   49198 main.go:141] libmachine: (embed-certs-867165) DBG | About to run SSH command:
	I1024 20:11:24.344290   49198 main.go:141] libmachine: (embed-certs-867165) DBG | exit 0
	I1024 20:11:24.433113   49198 main.go:141] libmachine: (embed-certs-867165) DBG | SSH cmd err, output: <nil>: 
	I1024 20:11:24.433578   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetConfigRaw
	I1024 20:11:24.434267   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:24.436768   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.437149   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.437178   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.437479   49198 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/config.json ...
	I1024 20:11:24.437738   49198 machine.go:88] provisioning docker machine ...
	I1024 20:11:24.437760   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:24.438014   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.438217   49198 buildroot.go:166] provisioning hostname "embed-certs-867165"
	I1024 20:11:24.438245   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.438431   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.440509   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.440861   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.440884   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.440998   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:24.441155   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.441329   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.441499   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:24.441644   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:24.441990   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:24.442009   49198 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-867165 && echo "embed-certs-867165" | sudo tee /etc/hostname
	I1024 20:11:24.570417   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-867165
	
	I1024 20:11:24.570456   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.573010   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.573421   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.573446   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.573634   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:24.573845   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.574000   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:24.574100   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:24.574296   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:24.574611   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:24.574628   49198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-867165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-867165/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-867165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:11:24.698255   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:24.698281   49198 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:11:24.698298   49198 buildroot.go:174] setting up certificates
	I1024 20:11:24.698306   49198 provision.go:83] configureAuth start
	I1024 20:11:24.698317   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetMachineName
	I1024 20:11:24.698624   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:24.701552   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.701900   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.701954   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.702044   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:24.704047   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.704389   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:24.704413   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:24.704578   49198 provision.go:138] copyHostCerts
	I1024 20:11:24.704632   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:11:24.704648   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:11:24.704713   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:11:24.704794   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:11:24.704801   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:11:24.704828   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:11:24.704877   49198 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:11:24.704883   49198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:11:24.704901   49198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:11:24.704961   49198 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.embed-certs-867165 san=[192.168.72.10 192.168.72.10 localhost 127.0.0.1 minikube embed-certs-867165]
	I1024 20:11:25.212018   49198 provision.go:172] copyRemoteCerts
	I1024 20:11:25.212075   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:11:25.212095   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.214791   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.215112   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.215141   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.215262   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.215490   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.215682   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.215805   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.301782   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:11:25.324352   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1024 20:11:25.346349   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:11:25.368012   49198 provision.go:86] duration metric: configureAuth took 669.695412ms
	I1024 20:11:25.368036   49198 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:11:25.368205   49198 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:25.368269   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.370479   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.370739   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.370782   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.370873   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.371063   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.371395   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.371593   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.371760   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:25.372083   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:25.372098   49198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:11:25.685250   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:11:25.685327   49198 machine.go:91] provisioned docker machine in 1.247541762s
	I1024 20:11:25.685347   49198 start.go:300] post-start starting for "embed-certs-867165" (driver="kvm2")
	I1024 20:11:25.685363   49198 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:11:25.685388   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.685781   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:11:25.685813   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.688378   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.688666   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.688712   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.688886   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.689115   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.689274   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.689463   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.775321   49198 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:11:25.779494   49198 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:11:25.779516   49198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:11:25.779590   49198 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:11:25.779663   49198 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:11:25.779748   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:11:25.788441   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:25.809843   49198 start.go:303] post-start completed in 124.478424ms
	I1024 20:11:25.809946   49198 fix.go:56] fixHost completed within 19.997269664s
	I1024 20:11:25.809985   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.812709   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.813101   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.813133   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.813265   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.813464   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.813650   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.813819   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.813962   49198 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:25.814293   49198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.72.10 22 <nil> <nil>}
	I1024 20:11:25.814309   49198 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:11:25.925691   49198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178285.873274561
	
	I1024 20:11:25.925721   49198 fix.go:206] guest clock: 1698178285.873274561
	I1024 20:11:25.925731   49198 fix.go:219] Guest: 2023-10-24 20:11:25.873274561 +0000 UTC Remote: 2023-10-24 20:11:25.809967209 +0000 UTC m=+287.089115618 (delta=63.307352ms)
	I1024 20:11:25.925760   49198 fix.go:190] guest clock delta is within tolerance: 63.307352ms
	I1024 20:11:25.925767   49198 start.go:83] releasing machines lock for "embed-certs-867165", held for 20.113201351s
	I1024 20:11:25.925801   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.926046   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:25.928979   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.929337   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.929369   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.929547   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930011   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930171   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:25.930239   49198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:11:25.930285   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.930332   49198 ssh_runner.go:195] Run: cat /version.json
	I1024 20:11:25.930356   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:25.932685   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.932918   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933167   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.933197   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933225   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:25.933254   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:25.933377   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.933548   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.933600   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:25.933758   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:25.933773   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.933934   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:25.933941   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:25.934075   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:26.046804   49198 ssh_runner.go:195] Run: systemctl --version
	I1024 20:11:26.052139   49198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:11:26.195404   49198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:11:26.201515   49198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:11:26.201602   49198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:11:26.215298   49198 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:11:26.215312   49198 start.go:472] detecting cgroup driver to use...
	I1024 20:11:26.215375   49198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:11:26.228683   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:11:26.240279   49198 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:11:26.240328   49198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:11:26.252314   49198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:11:26.264748   49198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:11:26.363370   49198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:11:26.472219   49198 docker.go:214] disabling docker service ...
	I1024 20:11:26.472293   49198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:11:26.485325   49198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:11:26.497949   49198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:11:26.614981   49198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:11:26.731140   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:11:26.750199   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:11:26.770158   49198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:11:26.770224   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.781180   49198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:11:26.781246   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.791901   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:26.802435   49198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
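Pieced together from the tee and sed commands above, the files on the guest should end up looking roughly like the following; this is a reconstruction from the commands shown, not output captured from the VM:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (keys touched by the sed edits)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"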
	I1024 20:11:26.812848   49198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:11:26.826330   49198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:11:26.837268   49198 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:11:26.837350   49198 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:11:26.853637   49198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
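The failed sysctl just above is expected on a freshly booted guest: the entries under /proc/sys/net/bridge/ only exist once the br_netfilter module is loaded, which is why the probe is immediately followed by modprobe. A rough manual equivalent of this step:

	# the bridge sysctls appear only after the module is loaded
	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables        # now resolves instead of "cannot stat"
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # enable IP forwarding, as done above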
	I1024 20:11:26.866347   49198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:11:26.985185   49198 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:11:27.154650   49198 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:11:27.154718   49198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:11:27.160801   49198 start.go:540] Will wait 60s for crictl version
	I1024 20:11:27.160848   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:11:27.164920   49198 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:11:27.202690   49198 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:11:27.202779   49198 ssh_runner.go:195] Run: crio --version
	I1024 20:11:27.250594   49198 ssh_runner.go:195] Run: crio --version
	I1024 20:11:27.296108   49198 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 20:11:25.950421   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Start
	I1024 20:11:25.950594   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring networks are active...
	I1024 20:11:25.951296   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring network default is active
	I1024 20:11:25.951666   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Ensuring network mk-default-k8s-diff-port-643126 is active
	I1024 20:11:25.952059   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Getting domain xml...
	I1024 20:11:25.952807   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Creating domain...
	I1024 20:11:27.231286   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting to get IP...
	I1024 20:11:27.232283   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.232673   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.232749   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.232677   50444 retry.go:31] will retry after 208.58934ms: waiting for machine to come up
	I1024 20:11:27.443376   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.443879   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.443919   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.443821   50444 retry.go:31] will retry after 257.382495ms: waiting for machine to come up
	I1024 20:11:27.703424   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.703968   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:27.704002   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:27.703931   50444 retry.go:31] will retry after 397.047762ms: waiting for machine to come up
	I1024 20:11:28.102593   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.103138   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.103169   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:28.103091   50444 retry.go:31] will retry after 512.560427ms: waiting for machine to come up
	I1024 20:11:27.297540   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetIP
	I1024 20:11:27.300396   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:27.300799   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:27.300829   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:27.301066   49198 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1024 20:11:27.305045   49198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:27.320300   49198 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:11:27.320366   49198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:27.359702   49198 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:11:27.359766   49198 ssh_runner.go:195] Run: which lz4
	I1024 20:11:27.363540   49198 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 20:11:27.367559   49198 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:11:27.367583   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1024 20:11:28.616845   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.617310   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:28.617342   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:28.617240   50444 retry.go:31] will retry after 674.554893ms: waiting for machine to come up
	I1024 20:11:29.293139   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:29.293640   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:29.293667   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:29.293603   50444 retry.go:31] will retry after 903.982479ms: waiting for machine to come up
	I1024 20:11:30.199764   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:30.200181   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:30.200218   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:30.200119   50444 retry.go:31] will retry after 835.036056ms: waiting for machine to come up
	I1024 20:11:31.037123   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:31.037584   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:31.037609   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:31.037524   50444 retry.go:31] will retry after 1.242617103s: waiting for machine to come up
	I1024 20:11:32.281808   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:32.282287   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:32.282312   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:32.282243   50444 retry.go:31] will retry after 1.694327665s: waiting for machine to come up
	I1024 20:11:29.249631   49198 crio.go:444] Took 1.886122 seconds to copy over tarball
	I1024 20:11:29.249712   49198 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:11:32.249370   49198 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.999632152s)
	I1024 20:11:32.249396   49198 crio.go:451] Took 2.999736 seconds to extract the tarball
	I1024 20:11:32.249404   49198 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:11:32.290929   49198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:32.335293   49198 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:11:32.335313   49198 cache_images.go:84] Images are preloaded, skipping loading
	I1024 20:11:32.335377   49198 ssh_runner.go:195] Run: crio config
	I1024 20:11:32.394988   49198 cni.go:84] Creating CNI manager for ""
	I1024 20:11:32.395016   49198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:32.395039   49198 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:11:32.395066   49198 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.10 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-867165 NodeName:embed-certs-867165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:11:32.395267   49198 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-867165"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:11:32.395363   49198 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-867165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-867165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:11:32.395412   49198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:11:32.408764   49198 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:11:32.408827   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:11:32.417504   49198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1024 20:11:32.433991   49198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:11:32.450599   49198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1024 20:11:32.467822   49198 ssh_runner.go:195] Run: grep 192.168.72.10	control-plane.minikube.internal$ /etc/hosts
	I1024 20:11:32.471830   49198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:32.485398   49198 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165 for IP: 192.168.72.10
	I1024 20:11:32.485440   49198 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:32.485591   49198 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:11:32.485627   49198 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:11:32.485692   49198 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/client.key
	I1024 20:11:32.485751   49198 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.key.802f554a
	I1024 20:11:32.485787   49198 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.key
	I1024 20:11:32.485883   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:11:32.485913   49198 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:11:32.485924   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:11:32.485946   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:11:32.485974   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:11:32.485999   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:11:32.486054   49198 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:32.486664   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:11:32.510981   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:11:32.533691   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:11:32.556372   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/embed-certs-867165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:11:32.578805   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:11:32.601563   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:11:32.624846   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:11:32.648498   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:11:32.672429   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:11:32.696146   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:11:32.719078   49198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:11:32.742894   49198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:11:32.758998   49198 ssh_runner.go:195] Run: openssl version
	I1024 20:11:32.764797   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:11:32.774075   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.778755   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.778809   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:32.784097   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:11:32.793365   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:11:32.802532   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.806890   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.806936   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:11:32.812430   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:11:32.821767   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:11:32.830930   49198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.835401   49198 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.835455   49198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:11:32.840880   49198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
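The hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed CA directory convention: the link name is the certificate's subject hash, which is exactly what the preceding `openssl x509 -hash -noout` calls print, and it is how OpenSSL-based verification locates a CA inside /etc/ssl/certs. A minimal sketch of the same step for one of the certificates above:

	# link a CA into the hashed cert directory under its subject-hash name
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"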
	I1024 20:11:32.850124   49198 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:11:32.854525   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:11:32.860161   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:11:32.866096   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:11:32.873246   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:11:32.880430   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:11:32.887436   49198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
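Each of the openssl invocations above passes -checkend 86400, which asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, non-zero means it expires within that window, so the caller can decide whether the existing certs need regenerating. A stand-alone example against one of the certs copied earlier in this run:

	# exit 0 if the cert is valid for at least another 24 hours, 1 otherwise
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "cert ok for 24h" || echo "cert expires within 24h"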
	I1024 20:11:32.892960   49198 kubeadm.go:404] StartCluster: {Name:embed-certs-867165 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-867165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:11:32.893073   49198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:11:32.893116   49198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:32.930748   49198 cri.go:89] found id: ""
	I1024 20:11:32.930817   49198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:11:32.939716   49198 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:11:32.939738   49198 kubeadm.go:636] restartCluster start
	I1024 20:11:32.939785   49198 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:11:32.947747   49198 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:32.948905   49198 kubeconfig.go:92] found "embed-certs-867165" server: "https://192.168.72.10:8443"
	I1024 20:11:32.951235   49198 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:11:32.959165   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:32.959215   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:32.970896   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:32.970912   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:32.970957   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:32.980621   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:33.481345   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:33.481442   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:33.492666   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:33.979087   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:33.979490   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:33.979520   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:33.979433   50444 retry.go:31] will retry after 1.877176786s: waiting for machine to come up
	I1024 20:11:35.859337   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:35.859735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:35.859758   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:35.859683   50444 retry.go:31] will retry after 2.235459842s: waiting for machine to come up
	I1024 20:11:38.097481   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:38.097924   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:38.097958   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:38.097878   50444 retry.go:31] will retry after 3.083066899s: waiting for machine to come up
	I1024 20:11:33.981370   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.077568   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.088845   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:34.481489   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.481554   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.492934   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:34.981614   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:34.981744   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:34.993154   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:35.480679   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:35.480752   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:35.492474   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:35.981612   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:35.981703   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:35.992389   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:36.480877   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:36.480982   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:36.492142   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:36.980700   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:36.980784   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:36.992439   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:37.480962   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:37.481040   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:37.492219   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:37.980706   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:37.980814   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:37.992040   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:38.481668   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:38.481764   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:38.493319   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.182306   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:41.182647   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | unable to find current IP address of domain default-k8s-diff-port-643126 in network mk-default-k8s-diff-port-643126
	I1024 20:11:41.182674   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | I1024 20:11:41.182602   50444 retry.go:31] will retry after 3.348794863s: waiting for machine to come up
	I1024 20:11:38.981418   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:38.981504   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:38.992810   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:39.481357   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:39.481448   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:39.492521   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:39.981019   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:39.981109   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:39.992766   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:40.481341   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:40.481404   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:40.492180   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:40.981106   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:40.981205   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:40.991931   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.481563   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:41.481629   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:41.492601   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:41.981132   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:41.981226   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:41.992061   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:42.481647   49198 api_server.go:166] Checking apiserver status ...
	I1024 20:11:42.481713   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:42.492524   49198 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:42.960175   49198 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:11:42.960230   49198 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:11:42.960243   49198 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:11:42.960322   49198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:42.998685   49198 cri.go:89] found id: ""
	I1024 20:11:42.998794   49198 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:11:43.013829   49198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:11:43.023081   49198 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:11:43.023161   49198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:11:43.032165   49198 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:11:43.032189   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:43.148027   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:45.942484   50077 start.go:369] acquired machines lock for "old-k8s-version-467375" in 2m12.988914754s
	I1024 20:11:45.942540   50077 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:11:45.942548   50077 fix.go:54] fixHost starting: 
	I1024 20:11:45.942969   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:45.943007   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:45.960424   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I1024 20:11:45.960851   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:45.961468   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:11:45.961498   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:45.961852   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:45.962045   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:11:45.962231   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:11:45.963803   50077 fix.go:102] recreateIfNeeded on old-k8s-version-467375: state=Stopped err=<nil>
	I1024 20:11:45.963841   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	W1024 20:11:45.964018   50077 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:11:45.965809   50077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467375" ...
	I1024 20:11:44.535120   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.535710   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Found IP for machine: 192.168.61.148
	I1024 20:11:44.535735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has current primary IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.535742   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Reserving static IP address...
	I1024 20:11:44.536160   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Reserved static IP address: 192.168.61.148
	I1024 20:11:44.536181   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Waiting for SSH to be available...
	I1024 20:11:44.536196   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-643126", mac: "52:54:00:9d:a9:b2", ip: "192.168.61.148"} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.536225   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | skip adding static IP to network mk-default-k8s-diff-port-643126 - found existing host DHCP lease matching {name: "default-k8s-diff-port-643126", mac: "52:54:00:9d:a9:b2", ip: "192.168.61.148"}
	I1024 20:11:44.536247   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Getting to WaitForSSH function...
	I1024 20:11:44.538297   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.538634   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.538669   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.538819   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Using SSH client type: external
	I1024 20:11:44.538846   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa (-rw-------)
	I1024 20:11:44.538897   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:11:44.538935   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | About to run SSH command:
	I1024 20:11:44.538947   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | exit 0
	I1024 20:11:44.629136   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | SSH cmd err, output: <nil>: 
	I1024 20:11:44.629505   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetConfigRaw
	I1024 20:11:44.630190   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:44.632462   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.632782   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.632807   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.633035   49708 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/config.json ...
	I1024 20:11:44.633215   49708 machine.go:88] provisioning docker machine ...
	I1024 20:11:44.633231   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:44.633416   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.633566   49708 buildroot.go:166] provisioning hostname "default-k8s-diff-port-643126"
	I1024 20:11:44.633580   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.633778   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.635853   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.636184   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.636217   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.636295   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:44.636462   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.636608   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.636742   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:44.636890   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:44.637307   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:44.637328   49708 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-643126 && echo "default-k8s-diff-port-643126" | sudo tee /etc/hostname
	I1024 20:11:44.775436   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-643126
	
	I1024 20:11:44.775468   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.778835   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.779280   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.779316   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.779494   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:44.779679   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.779810   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:44.779933   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:44.780147   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:44.780489   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:44.780518   49708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-643126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-643126/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-643126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:11:44.921274   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:11:44.921332   49708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:11:44.921368   49708 buildroot.go:174] setting up certificates
	I1024 20:11:44.921385   49708 provision.go:83] configureAuth start
	I1024 20:11:44.921404   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetMachineName
	I1024 20:11:44.921747   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:44.924977   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.925413   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.925443   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.925641   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:44.928106   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.928443   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:44.928484   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:44.928617   49708 provision.go:138] copyHostCerts
	I1024 20:11:44.928680   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:11:44.928703   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:11:44.928772   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:11:44.928918   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:11:44.928935   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:11:44.928969   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:11:44.929052   49708 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:11:44.929063   49708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:11:44.929089   49708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:11:44.929157   49708 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-643126 san=[192.168.61.148 192.168.61.148 localhost 127.0.0.1 minikube default-k8s-diff-port-643126]
	I1024 20:11:45.170614   49708 provision.go:172] copyRemoteCerts
	I1024 20:11:45.170679   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:11:45.170706   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.173876   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.174213   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.174251   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.174522   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.174744   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.174909   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.175033   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.266012   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1024 20:11:45.294626   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:11:45.323773   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
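The three scp calls above push the freshly generated server certificate and CA into /etc/docker on the guest. If the SANs from the provision step need confirming, the certificate can be decoded in place; a minimal check, assuming the profile name from this log and that openssl is available in the guest image:

    # decode the pushed server cert and show its SAN entries (host-side grep)
    minikube -p default-k8s-diff-port-643126 ssh -- \
      "sudo openssl x509 -noout -text -in /etc/docker/server.pem" | grep -A1 'Subject Alternative Name'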
	I1024 20:11:45.347515   49708 provision.go:86] duration metric: configureAuth took 426.107365ms
	I1024 20:11:45.347536   49708 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:11:45.347741   49708 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:45.347830   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.351151   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.351529   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.351560   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.351729   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.351898   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.352132   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.352359   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.352540   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:45.353017   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:45.353045   49708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:11:45.673767   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:11:45.673797   49708 machine.go:91] provisioned docker machine in 1.04057128s
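The SSH command above writes the CRIO_MINIKUBE_OPTIONS environment file and restarts cri-o; the resulting drop-in can be read back to confirm the insecure-registry flag landed (profile name taken from this log):

    # print the crio options file minikube just wrote
    minikube -p default-k8s-diff-port-643126 ssh -- cat /etc/sysconfig/crio.minikube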
	I1024 20:11:45.673809   49708 start.go:300] post-start starting for "default-k8s-diff-port-643126" (driver="kvm2")
	I1024 20:11:45.673821   49708 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:11:45.673844   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.674180   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:11:45.674213   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.677192   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.677621   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.677663   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.677817   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.678021   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.678180   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.678322   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.769507   49708 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:11:45.774136   49708 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:11:45.774161   49708 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:11:45.774240   49708 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:11:45.774333   49708 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:11:45.774456   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:11:45.782941   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:45.806536   49708 start.go:303] post-start completed in 132.710109ms
	I1024 20:11:45.806565   49708 fix.go:56] fixHost completed within 19.880653804s
	I1024 20:11:45.806602   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.809496   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.809854   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.809892   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.810096   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.810335   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.810534   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.810697   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.810870   49708 main.go:141] libmachine: Using SSH client type: native
	I1024 20:11:45.811297   49708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.148 22 <nil> <nil>}
	I1024 20:11:45.811312   49708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1024 20:11:45.942309   49708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178305.886866858
	
	I1024 20:11:45.942334   49708 fix.go:206] guest clock: 1698178305.886866858
	I1024 20:11:45.942343   49708 fix.go:219] Guest: 2023-10-24 20:11:45.886866858 +0000 UTC Remote: 2023-10-24 20:11:45.806569839 +0000 UTC m=+222.349889294 (delta=80.297019ms)
	I1024 20:11:45.942388   49708 fix.go:190] guest clock delta is within tolerance: 80.297019ms
	I1024 20:11:45.942399   49708 start.go:83] releasing machines lock for "default-k8s-diff-port-643126", held for 20.016514097s
	I1024 20:11:45.942428   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.942819   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:45.946079   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.946507   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.946548   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.946681   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947120   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947286   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:11:45.947353   49708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:11:45.947411   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.947564   49708 ssh_runner.go:195] Run: cat /version.json
	I1024 20:11:45.947591   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:11:45.950504   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.950930   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.950984   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951010   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951176   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.951342   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.951499   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:45.951522   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.951526   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:45.951638   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:45.951793   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:11:45.951946   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:11:45.952178   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:11:45.952345   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:11:46.043544   49708 ssh_runner.go:195] Run: systemctl --version
	I1024 20:11:46.072510   49708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:11:46.230010   49708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:11:46.237538   49708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:11:46.237608   49708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:11:46.259449   49708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:11:46.259468   49708 start.go:472] detecting cgroup driver to use...
	I1024 20:11:46.259530   49708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:11:46.278708   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:11:46.292769   49708 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:11:46.292827   49708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:11:46.311808   49708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:11:46.329420   49708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:11:46.452375   49708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:11:46.584041   49708 docker.go:214] disabling docker service ...
	I1024 20:11:46.584114   49708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:11:46.606114   49708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:11:46.623302   49708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:11:46.732771   49708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:11:46.862687   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:11:46.879573   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:11:46.900885   49708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:11:46.900955   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.911441   49708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:11:46.911500   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.921674   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.931937   49708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:11:46.942104   49708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:11:46.952610   49708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:11:46.961808   49708 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:11:46.961884   49708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:11:46.977789   49708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
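The sysctl failure a few lines up is expected while br_netfilter is not loaded; after the modprobe and the ip_forward write above, both settings can be re-checked from the guest with something like:

    # verify bridge netfilter and IPv4 forwarding on the node
    minikube -p default-k8s-diff-port-643126 ssh -- \
      "sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward"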
	I1024 20:11:46.990089   49708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:11:47.130248   49708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:11:47.307336   49708 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:11:47.307402   49708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:11:47.316743   49708 start.go:540] Will wait 60s for crictl version
	I1024 20:11:47.316795   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:11:47.321526   49708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:11:47.369079   49708 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:11:47.369169   49708 ssh_runner.go:195] Run: crio --version
	I1024 20:11:47.419428   49708 ssh_runner.go:195] Run: crio --version
	I1024 20:11:47.477016   49708 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
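The runtime detection above (crictl version, crio --version, then the "Preparing Kubernetes" line) can be reproduced by hand if the CRI-O version on the node is in doubt:

    # same checks the log runs, issued manually against the guest
    minikube -p default-k8s-diff-port-643126 ssh -- "sudo crictl version && crio --version"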
	I1024 20:11:45.967071   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Start
	I1024 20:11:45.967249   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring networks are active...
	I1024 20:11:45.967957   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring network default is active
	I1024 20:11:45.968324   50077 main.go:141] libmachine: (old-k8s-version-467375) Ensuring network mk-old-k8s-version-467375 is active
	I1024 20:11:45.968743   50077 main.go:141] libmachine: (old-k8s-version-467375) Getting domain xml...
	I1024 20:11:45.969525   50077 main.go:141] libmachine: (old-k8s-version-467375) Creating domain...
	I1024 20:11:47.346548   50077 main.go:141] libmachine: (old-k8s-version-467375) Waiting to get IP...
	I1024 20:11:47.347505   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.347894   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.347980   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.347887   50579 retry.go:31] will retry after 232.244798ms: waiting for machine to come up
	I1024 20:11:47.581582   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.582087   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.582118   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.582044   50579 retry.go:31] will retry after 319.930019ms: waiting for machine to come up
	I1024 20:11:47.478565   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetIP
	I1024 20:11:47.481659   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:47.482040   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:11:47.482066   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:11:47.482265   49708 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1024 20:11:47.487054   49708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:47.499693   49708 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:11:47.499765   49708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:47.551897   49708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:11:47.551978   49708 ssh_runner.go:195] Run: which lz4
	I1024 20:11:47.557026   49708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1024 20:11:47.562364   49708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:11:47.562393   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1024 20:11:43.852350   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.048386   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.117774   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:44.202966   49198 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:11:44.203042   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:44.215680   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:44.726471   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:45.226100   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:45.726494   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.226510   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.726607   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:11:46.758294   49198 api_server.go:72] duration metric: took 2.555329199s to wait for apiserver process to appear ...
	I1024 20:11:46.758319   49198 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:11:46.758339   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:46.758872   49198 api_server.go:269] stopped: https://192.168.72.10:8443/healthz: Get "https://192.168.72.10:8443/healthz": dial tcp 192.168.72.10:8443: connect: connection refused
	I1024 20:11:46.758909   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:46.759318   49198 api_server.go:269] stopped: https://192.168.72.10:8443/healthz: Get "https://192.168.72.10:8443/healthz": dial tcp 192.168.72.10:8443: connect: connection refused
	I1024 20:11:47.260047   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:50.910793   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:11:50.910830   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:11:50.910852   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:50.943069   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:11:50.943100   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:11:51.259498   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:51.265278   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:11:51.265316   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:11:51.759494   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:51.767253   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:11:51.767280   49198 api_server.go:103] status: https://192.168.72.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:11:52.259758   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:11:52.265202   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 200:
	ok
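The two checks that were failing above, poststarthook/rbac/bootstrap-roles and poststarthook/scheduling/bootstrap-system-priority-classes, clear once the bootstrap RBAC objects and priority classes are created, after which /healthz returns the plain "ok" seen here. Individual checks can also be queried on their own sub-path, assuming the same endpoint as above:

    # query a single healthz check rather than the aggregate result
    curl -k https://192.168.72.10:8443/healthz/poststarthook/rbac/bootstrap-roles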
	I1024 20:11:52.277533   49198 api_server.go:141] control plane version: v1.28.3
	I1024 20:11:52.277561   49198 api_server.go:131] duration metric: took 5.51923389s to wait for apiserver health ...
	I1024 20:11:52.277572   49198 cni.go:84] Creating CNI manager for ""
	I1024 20:11:52.277580   49198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:52.279542   49198 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:11:47.904065   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:47.904524   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:47.904551   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:47.904467   50579 retry.go:31] will retry after 440.170251ms: waiting for machine to come up
	I1024 20:11:48.346206   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:48.346778   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:48.346802   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:48.346686   50579 retry.go:31] will retry after 472.001777ms: waiting for machine to come up
	I1024 20:11:48.820100   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:48.820625   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:48.820660   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:48.820533   50579 retry.go:31] will retry after 487.055032ms: waiting for machine to come up
	I1024 20:11:49.309351   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:49.309816   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:49.309836   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:49.309751   50579 retry.go:31] will retry after 945.474211ms: waiting for machine to come up
	I1024 20:11:50.257106   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:50.257611   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:50.257641   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:50.257563   50579 retry.go:31] will retry after 915.312538ms: waiting for machine to come up
	I1024 20:11:51.174245   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:51.174832   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:51.174889   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:51.174792   50579 retry.go:31] will retry after 1.09533855s: waiting for machine to come up
	I1024 20:11:52.271604   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:52.272082   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:52.272111   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:52.272041   50579 retry.go:31] will retry after 1.411155014s: waiting for machine to come up
	I1024 20:11:49.517078   49708 crio.go:444] Took 1.960093 seconds to copy over tarball
	I1024 20:11:49.517170   49708 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:11:53.113830   49708 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.596633239s)
	I1024 20:11:53.113858   49708 crio.go:451] Took 3.596755 seconds to extract the tarball
	I1024 20:11:53.113865   49708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:11:53.157476   49708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:11:53.204980   49708 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:11:53.205004   49708 cache_images.go:84] Images are preloaded, skipping loading
	I1024 20:11:53.205090   49708 ssh_runner.go:195] Run: crio config
	I1024 20:11:53.264588   49708 cni.go:84] Creating CNI manager for ""
	I1024 20:11:53.264613   49708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:11:53.264634   49708 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:11:53.264662   49708 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.148 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-643126 NodeName:default-k8s-diff-port-643126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:11:53.264869   49708 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.148
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-643126"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
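The config rendered above is what minikube writes to /var/tmp/minikube/kubeadm.yaml on the node (it is copied as kubeadm.yaml.new further down); it can be read back for comparison, assuming the profile name from this log:

    # dump the kubeadm config as it exists on the node
    minikube -p default-k8s-diff-port-643126 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml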
	
	I1024 20:11:53.264975   49708 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-643126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-643126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
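The kubelet ExecStart above is installed as the 10-kubeadm.conf drop-in copied a few lines below; the effective unit on the guest can be reviewed with:

    # show the kubelet unit plus all drop-ins as systemd sees them
    minikube -p default-k8s-diff-port-643126 ssh -- systemctl cat kubelet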
	I1024 20:11:53.265054   49708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:11:53.275886   49708 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:11:53.275982   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:11:53.286132   49708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1024 20:11:53.303735   49708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:11:53.319522   49708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1024 20:11:53.338388   49708 ssh_runner.go:195] Run: grep 192.168.61.148	control-plane.minikube.internal$ /etc/hosts
	I1024 20:11:53.343108   49708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:11:53.355662   49708 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126 for IP: 192.168.61.148
	I1024 20:11:53.355709   49708 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:53.355873   49708 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:11:53.355910   49708 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:11:53.356023   49708 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.key
	I1024 20:11:53.356086   49708 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.key.8ba5a111
	I1024 20:11:53.356122   49708 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.key
	I1024 20:11:53.356237   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:11:53.356265   49708 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:11:53.356275   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:11:53.356299   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:11:53.356320   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:11:53.356341   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:11:53.356377   49708 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:11:53.357029   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:11:53.379968   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:11:53.401871   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:11:53.423699   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:11:53.445338   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:11:53.469994   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:11:53.495061   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:11:52.281055   49198 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:11:52.299421   49198 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
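The 457-byte conflist scp'd above is the bridge CNI config for this process (49198); based on the pod names in the listing below, its profile appears to be embed-certs-867165, in which case the file and the pod CIDR it carries can be inspected with:

    # print the bridge CNI config installed by minikube on that node
    minikube -p embed-certs-867165 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist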
	I1024 20:11:52.322020   49198 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:11:52.334273   49198 system_pods.go:59] 8 kube-system pods found
	I1024 20:11:52.334318   49198 system_pods.go:61] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:11:52.334332   49198 system_pods.go:61] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:11:52.334356   49198 system_pods.go:61] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:11:52.334372   49198 system_pods.go:61] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:11:52.334389   49198 system_pods.go:61] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:11:52.334401   49198 system_pods.go:61] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:11:52.334413   49198 system_pods.go:61] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:11:52.334425   49198 system_pods.go:61] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:11:52.334438   49198 system_pods.go:74] duration metric: took 12.395036ms to wait for pod list to return data ...
	I1024 20:11:52.334450   49198 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:11:52.338486   49198 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:11:52.338518   49198 node_conditions.go:123] node cpu capacity is 2
	I1024 20:11:52.338530   49198 node_conditions.go:105] duration metric: took 4.073559ms to run NodePressure ...
	I1024 20:11:52.338555   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:11:55.075569   49198 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.736987276s)
	I1024 20:11:55.075611   49198 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:11:55.080481   49198 kubeadm.go:787] kubelet initialised
	I1024 20:11:55.080508   49198 kubeadm.go:788] duration metric: took 4.884507ms waiting for restarted kubelet to initialise ...
	I1024 20:11:55.080519   49198 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:11:55.087371   49198 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.092583   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.092616   49198 pod_ready.go:81] duration metric: took 5.215308ms waiting for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.092627   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.092636   49198 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.098518   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "etcd-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.098540   49198 pod_ready.go:81] duration metric: took 5.887969ms waiting for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.098551   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "etcd-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.098560   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.103375   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.103400   49198 pod_ready.go:81] duration metric: took 4.83092ms waiting for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.103411   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.103419   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.108416   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.108443   49198 pod_ready.go:81] duration metric: took 5.016219ms waiting for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.108454   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.108462   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.482846   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-proxy-thkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.482873   49198 pod_ready.go:81] duration metric: took 374.401616ms waiting for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.482885   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-proxy-thkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.482897   49198 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:55.879895   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.879922   49198 pod_ready.go:81] duration metric: took 397.016576ms waiting for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:55.879935   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:55.879947   49198 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	I1024 20:11:56.280405   49198 pod_ready.go:97] node "embed-certs-867165" hosting pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:56.280445   49198 pod_ready.go:81] duration metric: took 400.488591ms waiting for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	E1024 20:11:56.280464   49198 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-867165" hosting pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:56.280475   49198 pod_ready.go:38] duration metric: took 1.19994252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:11:56.280498   49198 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:11:56.298423   49198 ops.go:34] apiserver oom_adj: -16
	I1024 20:11:56.298445   49198 kubeadm.go:640] restartCluster took 23.358699894s
	I1024 20:11:56.298455   49198 kubeadm.go:406] StartCluster complete in 23.405500606s
	I1024 20:11:56.298474   49198 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:56.298551   49198 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:11:56.300724   49198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:11:56.300999   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:11:56.301104   49198 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:11:56.301193   49198 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-867165"
	I1024 20:11:56.301203   49198 config.go:182] Loaded profile config "embed-certs-867165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:11:56.301216   49198 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-867165"
	W1024 20:11:56.301261   49198 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:11:56.301260   49198 addons.go:69] Setting metrics-server=true in profile "embed-certs-867165"
	I1024 20:11:56.301290   49198 addons.go:69] Setting default-storageclass=true in profile "embed-certs-867165"
	I1024 20:11:56.301312   49198 addons.go:231] Setting addon metrics-server=true in "embed-certs-867165"
	I1024 20:11:56.301315   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	W1024 20:11:56.301328   49198 addons.go:240] addon metrics-server should already be in state true
	I1024 20:11:56.301331   49198 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-867165"
	I1024 20:11:56.301418   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	I1024 20:11:56.301743   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301744   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301767   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.301771   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.301826   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.301867   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.307030   49198 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-867165" context rescaled to 1 replicas
	I1024 20:11:56.307062   49198 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.10 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:11:56.309053   49198 out.go:177] * Verifying Kubernetes components...
	I1024 20:11:56.310743   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:11:56.317523   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I1024 20:11:56.317889   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.318430   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.318450   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.318881   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.319437   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.319486   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.320723   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1024 20:11:56.320906   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39685
	I1024 20:11:56.321377   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.321491   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.322079   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.322107   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.322370   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.322389   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.322464   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.322770   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.322829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.323410   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.323444   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.326654   49198 addons.go:231] Setting addon default-storageclass=true in "embed-certs-867165"
	W1024 20:11:56.326674   49198 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:11:56.326700   49198 host.go:66] Checking if "embed-certs-867165" exists ...
	I1024 20:11:56.327084   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.327111   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.335811   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I1024 20:11:56.336310   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.336762   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.336774   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.337109   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.337272   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.338868   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.340964   49198 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:11:56.342438   49198 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:11:56.342454   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:11:56.342472   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.341955   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I1024 20:11:56.343402   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.344019   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.344038   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.344502   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.344694   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.345753   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.346097   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I1024 20:11:56.346367   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.346398   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.346660   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.346666   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.346829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.348534   49198 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:11:53.684729   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:53.685093   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:53.685129   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:53.685030   50579 retry.go:31] will retry after 1.793178726s: waiting for machine to come up
	I1024 20:11:55.481150   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:55.481696   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:55.481729   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:55.481639   50579 retry.go:31] will retry after 2.680463816s: waiting for machine to come up
	I1024 20:11:56.347164   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.347192   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.350114   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.350141   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:11:56.350155   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:11:56.350174   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.350270   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.350397   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.350847   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.351478   49198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:11:56.351514   49198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:11:56.354060   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.354451   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.354472   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.354625   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.354819   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.354978   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.355161   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.371309   49198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1024 20:11:56.371746   49198 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:11:56.372300   49198 main.go:141] libmachine: Using API Version  1
	I1024 20:11:56.372325   49198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:11:56.372764   49198 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:11:56.372981   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetState
	I1024 20:11:56.374651   49198 main.go:141] libmachine: (embed-certs-867165) Calling .DriverName
	I1024 20:11:56.374894   49198 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:11:56.374911   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:11:56.374934   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHHostname
	I1024 20:11:56.377962   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.378385   49198 main.go:141] libmachine: (embed-certs-867165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:66:c6", ip: ""} in network mk-embed-certs-867165: {Iface:virbr4 ExpiryTime:2023-10-24 21:11:18 +0000 UTC Type:0 Mac:52:54:00:59:66:c6 Iaid: IPaddr:192.168.72.10 Prefix:24 Hostname:embed-certs-867165 Clientid:01:52:54:00:59:66:c6}
	I1024 20:11:56.378408   49198 main.go:141] libmachine: (embed-certs-867165) DBG | domain embed-certs-867165 has defined IP address 192.168.72.10 and MAC address 52:54:00:59:66:c6 in network mk-embed-certs-867165
	I1024 20:11:56.378585   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHPort
	I1024 20:11:56.378789   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHKeyPath
	I1024 20:11:56.378954   49198 main.go:141] libmachine: (embed-certs-867165) Calling .GetSSHUsername
	I1024 20:11:56.379083   49198 sshutil.go:53] new ssh client: &{IP:192.168.72.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/embed-certs-867165/id_rsa Username:docker}
	I1024 20:11:56.471271   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:11:56.504355   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:11:56.504382   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:11:56.552351   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:11:56.576037   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:11:56.576068   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:11:56.606745   49198 node_ready.go:35] waiting up to 6m0s for node "embed-certs-867165" to be "Ready" ...
	I1024 20:11:56.606772   49198 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:11:56.620862   49198 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:11:56.620897   49198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:11:56.676519   49198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:11:57.851757   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.380440836s)
	I1024 20:11:57.851814   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.851816   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.299429923s)
	I1024 20:11:57.851829   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.851865   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.851882   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852242   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852262   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852272   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.852282   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852368   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852412   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852441   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.852467   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.852412   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.852537   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852560   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.852814   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.852859   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.852877   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860105   49198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183533543s)
	I1024 20:11:57.860176   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.860195   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.860492   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.860494   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.860515   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860526   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.860537   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.860828   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.860857   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.860876   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.860886   49198 addons.go:467] Verifying addon metrics-server=true in "embed-certs-867165"
	I1024 20:11:57.860990   49198 main.go:141] libmachine: Making call to close driver server
	I1024 20:11:57.861011   49198 main.go:141] libmachine: (embed-certs-867165) Calling .Close
	I1024 20:11:57.861220   49198 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:11:57.861227   49198 main.go:141] libmachine: (embed-certs-867165) DBG | Closing plugin on server side
	I1024 20:11:57.861236   49198 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:11:57.864370   49198 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1024 20:11:53.521030   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:11:53.844700   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:11:53.868393   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:11:53.892495   49708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:11:53.916345   49708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:11:53.935576   49708 ssh_runner.go:195] Run: openssl version
	I1024 20:11:53.943066   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:11:53.957325   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.962959   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.963026   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:11:53.969104   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:11:53.980253   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:11:53.990977   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:53.995906   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:53.995992   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:11:54.001847   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:11:54.012635   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:11:54.023490   49708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.028300   49708 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.028355   49708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:11:54.033965   49708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:11:54.044984   49708 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:11:54.049588   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:11:54.055434   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:11:54.061692   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:11:54.068131   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:11:54.074484   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:11:54.080349   49708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1024 20:11:54.086499   49708 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-643126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-643126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.148 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:11:54.086598   49708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:11:54.086655   49708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:11:54.127406   49708 cri.go:89] found id: ""
	I1024 20:11:54.127494   49708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:11:54.137720   49708 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:11:54.137743   49708 kubeadm.go:636] restartCluster start
	I1024 20:11:54.137801   49708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:11:54.147925   49708 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.149006   49708 kubeconfig.go:92] found "default-k8s-diff-port-643126" server: "https://192.168.61.148:8444"
	I1024 20:11:54.151513   49708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:11:54.162303   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.162371   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.173715   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.173763   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.173816   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.184641   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:54.685342   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:54.685431   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:54.698640   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:55.185173   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:55.185284   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:55.201355   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:55.684814   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:55.684885   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:55.696664   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:56.185711   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:56.185795   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:56.201419   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:56.684932   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:56.685029   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:56.701458   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.185009   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:57.185111   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:57.201166   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.685654   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:57.685739   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:57.701496   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:58.185022   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:58.185076   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:58.197394   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:57.865715   49198 addons.go:502] enable addons completed in 1.564611111s: enabled=[storage-provisioner metrics-server default-storageclass]
	I1024 20:11:58.683275   49198 node_ready.go:58] node "embed-certs-867165" has status "Ready":"False"
	I1024 20:11:58.163942   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:11:58.164342   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:11:58.164369   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:11:58.164308   50579 retry.go:31] will retry after 2.238050336s: waiting for machine to come up
	I1024 20:12:00.403552   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:00.403947   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:12:00.403975   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:12:00.403907   50579 retry.go:31] will retry after 3.901299207s: waiting for machine to come up
	I1024 20:11:58.685131   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:58.685225   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:58.700458   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:59.184854   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:59.184936   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:59.200498   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:11:59.685159   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:11:59.685260   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:11:59.698793   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.185350   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:00.185418   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:00.200046   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.685255   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:00.685341   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:00.698229   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:01.185036   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:01.185105   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:01.200083   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:01.685617   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:01.685700   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:01.697442   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:02.184897   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:02.184980   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:02.196208   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:02.685769   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:02.685854   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:02.697356   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:03.184898   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:03.184977   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:03.196522   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:00.684425   49198 node_ready.go:58] node "embed-certs-867165" has status "Ready":"False"
	I1024 20:12:01.683130   49198 node_ready.go:49] node "embed-certs-867165" has status "Ready":"True"
	I1024 20:12:01.683154   49198 node_ready.go:38] duration metric: took 5.076371929s waiting for node "embed-certs-867165" to be "Ready" ...
	I1024 20:12:01.683162   49198 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:01.689566   49198 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:01.695393   49198 pod_ready.go:92] pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:01.695416   49198 pod_ready.go:81] duration metric: took 5.827696ms waiting for pod "coredns-5dd5756b68-6qq4r" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:01.695427   49198 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:03.712775   49198 pod_ready.go:102] pod "etcd-embed-certs-867165" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:04.306338   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:04.306804   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | unable to find current IP address of domain old-k8s-version-467375 in network mk-old-k8s-version-467375
	I1024 20:12:04.306835   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | I1024 20:12:04.306770   50579 retry.go:31] will retry after 5.15211395s: waiting for machine to come up
	I1024 20:12:03.685737   49708 api_server.go:166] Checking apiserver status ...
	I1024 20:12:03.685827   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:03.697510   49708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:04.163385   49708 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:12:04.163416   49708 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:12:04.163449   49708 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:12:04.163520   49708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:04.209780   49708 cri.go:89] found id: ""
	I1024 20:12:04.209834   49708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:12:04.226347   49708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:12:04.235134   49708 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:12:04.235185   49708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:04.243361   49708 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:04.243380   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:04.370510   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.461155   49708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090606159s)
	I1024 20:12:05.461192   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.649281   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.742338   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:05.829426   49708 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:12:05.829494   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:05.841869   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:06.356907   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:06.856157   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:07.356140   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:07.856020   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:08.356129   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:08.382595   49708 api_server.go:72] duration metric: took 2.553177252s to wait for apiserver process to appear ...
	I1024 20:12:08.382622   49708 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:12:08.382641   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:04.213550   49198 pod_ready.go:92] pod "etcd-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.213573   49198 pod_ready.go:81] duration metric: took 2.518138084s waiting for pod "etcd-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.213585   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.218813   49198 pod_ready.go:92] pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.218841   49198 pod_ready.go:81] duration metric: took 5.247061ms waiting for pod "kube-apiserver-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.218855   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.224562   49198 pod_ready.go:92] pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.224585   49198 pod_ready.go:81] duration metric: took 5.720637ms waiting for pod "kube-controller-manager-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.224597   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.484197   49198 pod_ready.go:92] pod "kube-proxy-thkqr" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.484216   49198 pod_ready.go:81] duration metric: took 259.611869ms waiting for pod "kube-proxy-thkqr" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.484224   49198 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.883941   49198 pod_ready.go:92] pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:04.883968   49198 pod_ready.go:81] duration metric: took 399.73679ms waiting for pod "kube-scheduler-embed-certs-867165" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:04.883982   49198 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:07.193414   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:10.878419   49071 start.go:369] acquired machines lock for "no-preload-014826" in 1m0.065559113s
	I1024 20:12:10.878467   49071 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:12:10.878475   49071 fix.go:54] fixHost starting: 
	I1024 20:12:10.878869   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:10.878901   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:10.898307   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I1024 20:12:10.898732   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:10.899250   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:12:10.899268   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:10.899614   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:10.899790   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:10.899933   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:12:10.901569   49071 fix.go:102] recreateIfNeeded on no-preload-014826: state=Stopped err=<nil>
	I1024 20:12:10.901593   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	W1024 20:12:10.901753   49071 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:12:10.904367   49071 out.go:177] * Restarting existing kvm2 VM for "no-preload-014826" ...
	I1024 20:12:09.462373   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.462813   50077 main.go:141] libmachine: (old-k8s-version-467375) Found IP for machine: 192.168.39.71
	I1024 20:12:09.462836   50077 main.go:141] libmachine: (old-k8s-version-467375) Reserving static IP address...
	I1024 20:12:09.462853   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has current primary IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.463385   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "old-k8s-version-467375", mac: "52:54:00:28:42:97", ip: "192.168.39.71"} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.463423   50077 main.go:141] libmachine: (old-k8s-version-467375) Reserved static IP address: 192.168.39.71
	I1024 20:12:09.463442   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | skip adding static IP to network mk-old-k8s-version-467375 - found existing host DHCP lease matching {name: "old-k8s-version-467375", mac: "52:54:00:28:42:97", ip: "192.168.39.71"}
	I1024 20:12:09.463463   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Getting to WaitForSSH function...
	I1024 20:12:09.463484   50077 main.go:141] libmachine: (old-k8s-version-467375) Waiting for SSH to be available...
	I1024 20:12:09.465635   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.465951   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.465979   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.466131   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Using SSH client type: external
	I1024 20:12:09.466167   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa (-rw-------)
	I1024 20:12:09.466210   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:12:09.466227   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | About to run SSH command:
	I1024 20:12:09.466256   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | exit 0
	I1024 20:12:09.565274   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | SSH cmd err, output: <nil>: 
	I1024 20:12:09.565647   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetConfigRaw
	I1024 20:12:09.566251   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:09.569078   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.569551   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.569585   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.569863   50077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/config.json ...
	I1024 20:12:09.570097   50077 machine.go:88] provisioning docker machine ...
	I1024 20:12:09.570122   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:09.570355   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.570604   50077 buildroot.go:166] provisioning hostname "old-k8s-version-467375"
	I1024 20:12:09.570634   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.570807   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.573170   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.573560   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.573587   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.573757   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:09.573934   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.574080   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.574209   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:09.574414   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:09.574840   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:09.574858   50077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467375 && echo "old-k8s-version-467375" | sudo tee /etc/hostname
	I1024 20:12:09.718150   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467375
	
	I1024 20:12:09.718201   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.721079   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.721461   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.721495   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.721653   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:09.721865   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.722016   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:09.722167   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:09.722324   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:09.722712   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:09.722732   50077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467375' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467375/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467375' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:12:09.865069   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:12:09.865098   50077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:12:09.865125   50077 buildroot.go:174] setting up certificates
	I1024 20:12:09.865136   50077 provision.go:83] configureAuth start
	I1024 20:12:09.865151   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetMachineName
	I1024 20:12:09.865449   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:09.868055   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.868480   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.868513   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.868693   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:09.870838   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.871203   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:09.871227   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:09.871363   50077 provision.go:138] copyHostCerts
	I1024 20:12:09.871411   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:12:09.871423   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:12:09.871490   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:12:09.871613   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:12:09.871625   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:12:09.871655   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:12:09.871743   50077 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:12:09.871753   50077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:12:09.871783   50077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:12:09.871856   50077 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467375 san=[192.168.39.71 192.168.39.71 localhost 127.0.0.1 minikube old-k8s-version-467375]
	I1024 20:12:10.091178   50077 provision.go:172] copyRemoteCerts
	I1024 20:12:10.091229   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:12:10.091253   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.094245   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.094550   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.094590   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.094759   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.094955   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.095123   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.095271   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.192715   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:12:10.216110   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:12:10.239468   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 20:12:10.263113   50077 provision.go:86] duration metric: configureAuth took 397.957727ms
	I1024 20:12:10.263138   50077 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:12:10.263366   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:12:10.263480   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.265995   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.266293   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.266334   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.266467   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.266696   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.266863   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.267027   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.267168   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:10.267653   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:10.267677   50077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:12:10.596009   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:12:10.596032   50077 machine.go:91] provisioned docker machine in 1.025920355s
	I1024 20:12:10.596041   50077 start.go:300] post-start starting for "old-k8s-version-467375" (driver="kvm2")
	I1024 20:12:10.596050   50077 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:12:10.596075   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.596415   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:12:10.596450   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.598886   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.599234   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.599259   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.599446   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.599647   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.599812   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.599955   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.697045   50077 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:12:10.701363   50077 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:12:10.701387   50077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:12:10.701458   50077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:12:10.701546   50077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:12:10.701653   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:12:10.712072   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:10.737471   50077 start.go:303] post-start completed in 141.415073ms
	I1024 20:12:10.737508   50077 fix.go:56] fixHost completed within 24.794946143s
	I1024 20:12:10.737533   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.740438   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.740792   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.740820   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.741024   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.741247   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.741428   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.741691   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.741861   50077 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:10.742407   50077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1024 20:12:10.742431   50077 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:12:10.878250   50077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178330.824734287
	
	I1024 20:12:10.878273   50077 fix.go:206] guest clock: 1698178330.824734287
	I1024 20:12:10.878283   50077 fix.go:219] Guest: 2023-10-24 20:12:10.824734287 +0000 UTC Remote: 2023-10-24 20:12:10.737513672 +0000 UTC m=+157.935911605 (delta=87.220615ms)
	I1024 20:12:10.878307   50077 fix.go:190] guest clock delta is within tolerance: 87.220615ms
	I1024 20:12:10.878314   50077 start.go:83] releasing machines lock for "old-k8s-version-467375", held for 24.935800385s
	I1024 20:12:10.878347   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.878614   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:10.881335   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.881746   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.881784   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.881933   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882442   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882654   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:12:10.882741   50077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:12:10.882801   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.882860   50077 ssh_runner.go:195] Run: cat /version.json
	I1024 20:12:10.882886   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:12:10.885640   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.885856   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886047   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.886070   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886209   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.886276   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:10.886315   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:10.886383   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.886439   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:12:10.886535   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.886579   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:12:10.886683   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:12:10.886699   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:10.886816   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:12:11.006700   50077 ssh_runner.go:195] Run: systemctl --version
	I1024 20:12:11.012734   50077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:12:11.162399   50077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:12:11.169673   50077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:12:11.169751   50077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:12:11.184770   50077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:12:11.184794   50077 start.go:472] detecting cgroup driver to use...
	I1024 20:12:11.184858   50077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:12:11.202317   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:12:11.218122   50077 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:12:11.218187   50077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:12:11.233177   50077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:12:11.247591   50077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:12:11.387195   50077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:12:11.520544   50077 docker.go:214] disabling docker service ...
	I1024 20:12:11.520615   50077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:12:11.539166   50077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:12:11.552957   50077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:12:11.710494   50077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:12:11.837532   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:12:11.854418   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:12:11.874953   50077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1024 20:12:11.875040   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.887115   50077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:12:11.887206   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.898994   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.908652   50077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:11.918280   50077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:12:11.930870   50077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:12:11.939522   50077 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:12:11.939580   50077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:12:11.955005   50077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:12:11.965173   50077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:12:12.098480   50077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:12:12.296897   50077 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:12:12.296993   50077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:12:12.302906   50077 start.go:540] Will wait 60s for crictl version
	I1024 20:12:12.302956   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:12.307142   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:12:12.353253   50077 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:12:12.353369   50077 ssh_runner.go:195] Run: crio --version
	I1024 20:12:12.417241   50077 ssh_runner.go:195] Run: crio --version
	I1024 20:12:12.486375   50077 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1024 20:12:12.487819   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetIP
	I1024 20:12:12.491366   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:12.491830   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:12:12.491862   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:12:12.492054   50077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1024 20:12:12.497705   50077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
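(Editor's note, not part of the captured log.) The one-liner above is the idempotent /etc/hosts update minikube performs over SSH: strip any existing "host.minikube.internal" entry, then re-append it with the gateway IP. A minimal Go sketch of the same filter-and-append pattern is shown below; the file path is a placeholder and the real command runs remotely with sudo, so this is only an illustration of the logic, not minikube's implementation.

// hosts_entry.go: illustrative sketch only.
package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostEntry drops any existing line ending in "\t<name>" and appends a
// fresh "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline above.
func setHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var out []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			out = append(out, line)
		}
	}
	out = append(out, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0644)
}

func main() {
	// Placeholder path; the log shows the real target is /etc/hosts on the guest.
	if err := setHostEntry("/tmp/hosts.example", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}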
	I1024 20:12:12.514116   50077 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 20:12:12.514208   50077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:12.569171   50077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 20:12:12.569247   50077 ssh_runner.go:195] Run: which lz4
	I1024 20:12:12.574729   50077 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 20:12:12.579319   50077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 20:12:12.579364   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1024 20:12:10.905856   49071 main.go:141] libmachine: (no-preload-014826) Calling .Start
	I1024 20:12:10.906027   49071 main.go:141] libmachine: (no-preload-014826) Ensuring networks are active...
	I1024 20:12:10.906761   49071 main.go:141] libmachine: (no-preload-014826) Ensuring network default is active
	I1024 20:12:10.907112   49071 main.go:141] libmachine: (no-preload-014826) Ensuring network mk-no-preload-014826 is active
	I1024 20:12:10.907486   49071 main.go:141] libmachine: (no-preload-014826) Getting domain xml...
	I1024 20:12:10.908225   49071 main.go:141] libmachine: (no-preload-014826) Creating domain...
	I1024 20:12:12.324832   49071 main.go:141] libmachine: (no-preload-014826) Waiting to get IP...
	I1024 20:12:12.326055   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.326595   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.326695   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.326594   50821 retry.go:31] will retry after 197.462386ms: waiting for machine to come up
	I1024 20:12:12.526293   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.526743   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.526774   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.526720   50821 retry.go:31] will retry after 271.486585ms: waiting for machine to come up
	I1024 20:12:12.800360   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:12.801756   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:12.801940   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:12.801863   50821 retry.go:31] will retry after 486.882671ms: waiting for machine to come up
	I1024 20:12:12.479397   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:12.479431   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:12.479445   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:12.490441   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:12.490470   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:12.990764   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:13.006526   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:13.006556   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:13.490974   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:13.499731   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:12:13.499764   49708 api_server.go:103] status: https://192.168.61.148:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:12:09.195216   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:11.694410   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:13.698362   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:13.991467   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:12:14.011775   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 200:
	ok
	I1024 20:12:14.048756   49708 api_server.go:141] control plane version: v1.28.3
	I1024 20:12:14.048791   49708 api_server.go:131] duration metric: took 5.666161032s to wait for apiserver health ...
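(Editor's note, not part of the captured log.) The preceding block shows the apiserver healthz wait: the check polls https://192.168.61.148:8444/healthz, tolerates 500 responses while post-start hooks (rbac/bootstrap-roles, crd-informer-synced, etc.) finish, and stops once the endpoint returns 200 "ok". The sketch below illustrates that poll-until-healthy pattern under stated assumptions; the URL comes from the log, but the timeout, poll interval, and the skipped TLS verification are placeholders and do not reflect minikube's actual client configuration.

// healthz_poll.go: illustrative sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz GETs the healthz URL until it returns 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Assumption for the sketch: skip cert verification for a local probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz finally returned 200 "ok"
			}
			// Non-200 during bring-up, e.g. the 500 bodies seen in the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible above
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.148:8444/healthz", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}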
	I1024 20:12:14.048802   49708 cni.go:84] Creating CNI manager for ""
	I1024 20:12:14.048812   49708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:14.050652   49708 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:12:14.052331   49708 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:12:14.086953   49708 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:12:14.142753   49708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:12:14.162085   49708 system_pods.go:59] 8 kube-system pods found
	I1024 20:12:14.162211   49708 system_pods.go:61] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:12:14.162246   49708 system_pods.go:61] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:12:14.162280   49708 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:12:14.162307   49708 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:12:14.162330   49708 system_pods.go:61] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:12:14.162352   49708 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:12:14.162375   49708 system_pods.go:61] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:12:14.162411   49708 system_pods.go:61] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:12:14.162434   49708 system_pods.go:74] duration metric: took 19.657104ms to wait for pod list to return data ...
	I1024 20:12:14.162456   49708 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:12:14.173042   49708 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:12:14.173078   49708 node_conditions.go:123] node cpu capacity is 2
	I1024 20:12:14.173093   49708 node_conditions.go:105] duration metric: took 10.618815ms to run NodePressure ...
	I1024 20:12:14.173117   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:14.763495   49708 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:12:14.768626   49708 kubeadm.go:787] kubelet initialised
	I1024 20:12:14.768653   49708 kubeadm.go:788] duration metric: took 5.128553ms waiting for restarted kubelet to initialise ...
	I1024 20:12:14.768663   49708 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:14.788128   49708 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.800546   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.800582   49708 pod_ready.go:81] duration metric: took 12.417978ms waiting for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.800597   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.800610   49708 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.808416   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.808448   49708 pod_ready.go:81] duration metric: took 7.821099ms waiting for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.808463   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.808472   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.814286   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.814317   49708 pod_ready.go:81] duration metric: took 5.833548ms waiting for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.814331   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.814341   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:14.825548   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.825582   49708 pod_ready.go:81] duration metric: took 11.230382ms waiting for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:14.825596   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:14.825606   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.168279   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-proxy-x4zbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.168323   49708 pod_ready.go:81] duration metric: took 342.707312ms waiting for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.168338   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-proxy-x4zbh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.168351   49708 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.567697   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.567735   49708 pod_ready.go:81] duration metric: took 399.371702ms waiting for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.567750   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.567838   49708 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:15.967716   49708 pod_ready.go:97] node "default-k8s-diff-port-643126" hosting pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.967750   49708 pod_ready.go:81] duration metric: took 399.892272ms waiting for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	E1024 20:12:15.967764   49708 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-643126" hosting pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:15.967773   49708 pod_ready.go:38] duration metric: took 1.199098599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:15.967793   49708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:12:15.986399   49708 ops.go:34] apiserver oom_adj: -16
	I1024 20:12:15.986422   49708 kubeadm.go:640] restartCluster took 21.848673162s
	I1024 20:12:15.986430   49708 kubeadm.go:406] StartCluster complete in 21.899940105s
	I1024 20:12:15.986444   49708 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:15.986545   49708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:12:15.989108   49708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:15.989647   49708 config.go:182] Loaded profile config "default-k8s-diff-port-643126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:12:15.989617   49708 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:12:15.989715   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:12:15.989719   49708 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989736   49708 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-643126"
	W1024 20:12:15.989752   49708 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:12:15.989752   49708 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989775   49708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-643126"
	I1024 20:12:15.989786   49708 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-643126"
	I1024 20:12:15.989802   49708 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-643126"
	I1024 20:12:15.989804   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	W1024 20:12:15.989809   49708 addons.go:240] addon metrics-server should already be in state true
	I1024 20:12:15.989849   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	I1024 20:12:15.990183   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990192   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990246   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.990294   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.990209   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:15.990327   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:15.995810   49708 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-643126" context rescaled to 1 replicas
	I1024 20:12:15.995838   49708 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.148 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:12:15.998001   49708 out.go:177] * Verifying Kubernetes components...
	I1024 20:12:16.001589   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:12:16.010690   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I1024 20:12:16.011310   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.011861   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.011890   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.012279   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.012906   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.012960   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.013706   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I1024 20:12:16.014057   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.014533   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.014560   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.014905   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.015330   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I1024 20:12:16.015444   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.015486   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.015703   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.016168   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.016188   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.016591   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.016763   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.020428   49708 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-643126"
	W1024 20:12:16.020448   49708 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:12:16.020474   49708 host.go:66] Checking if "default-k8s-diff-port-643126" exists ...
	I1024 20:12:16.020840   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.020873   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.031538   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I1024 20:12:16.033822   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.034350   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.034367   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.034746   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.034802   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34969
	I1024 20:12:16.034978   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.035073   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.035525   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.035549   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.035943   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.036217   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.036694   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.038891   49708 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:12:16.037871   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.040815   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:12:16.040832   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:12:16.040851   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.042238   49708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:14.393634   50077 crio.go:444] Took 1.818945 seconds to copy over tarball
	I1024 20:12:14.393720   50077 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 20:12:17.795931   50077 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.402175992s)
	I1024 20:12:17.795962   50077 crio.go:451] Took 3.402303 seconds to extract the tarball
	I1024 20:12:17.795974   50077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 20:12:17.841100   50077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:16.043742   49708 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:12:16.043758   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:12:16.043775   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.046924   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.047003   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.047035   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.047068   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.047224   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.049392   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.049433   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.049469   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.049487   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I1024 20:12:16.049492   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.049976   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.050488   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.050502   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.050534   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.050712   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.050810   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.050844   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.050974   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.051292   49708 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:12:16.051327   49708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:12:16.051585   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.067412   49708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I1024 20:12:16.067810   49708 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:12:16.068428   49708 main.go:141] libmachine: Using API Version  1
	I1024 20:12:16.068445   49708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:12:16.068991   49708 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:12:16.069222   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetState
	I1024 20:12:16.070923   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .DriverName
	I1024 20:12:16.071196   49708 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:12:16.071219   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:12:16.071238   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHHostname
	I1024 20:12:16.074735   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.075400   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a9:b2", ip: ""} in network mk-default-k8s-diff-port-643126: {Iface:virbr3 ExpiryTime:2023-10-24 21:11:38 +0000 UTC Type:0 Mac:52:54:00:9d:a9:b2 Iaid: IPaddr:192.168.61.148 Prefix:24 Hostname:default-k8s-diff-port-643126 Clientid:01:52:54:00:9d:a9:b2}
	I1024 20:12:16.075431   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | domain default-k8s-diff-port-643126 has defined IP address 192.168.61.148 and MAC address 52:54:00:9d:a9:b2 in network mk-default-k8s-diff-port-643126
	I1024 20:12:16.075630   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHPort
	I1024 20:12:16.075796   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHKeyPath
	I1024 20:12:16.075935   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .GetSSHUsername
	I1024 20:12:16.076097   49708 sshutil.go:53] new ssh client: &{IP:192.168.61.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/default-k8s-diff-port-643126/id_rsa Username:docker}
	I1024 20:12:16.201177   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:12:16.201198   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:12:16.224757   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:12:16.247200   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:12:16.247225   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:12:16.259476   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:12:16.324327   49708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:12:16.324354   49708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:12:16.371331   49708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:12:16.384042   49708 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-643126" to be "Ready" ...
	I1024 20:12:16.384367   49708 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:12:17.654459   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.429657283s)
	I1024 20:12:17.654516   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.654529   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.654951   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:17.654978   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.654990   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:17.655004   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.655016   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.655330   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.655353   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:17.672310   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:17.672337   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:17.672693   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:17.672738   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:17.672761   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.138719   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.879209719s)
	I1024 20:12:18.138769   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.138783   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.139079   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.139091   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.139103   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.139117   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.139132   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.139322   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.139338   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.139338   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.203722   49708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.832303736s)
	I1024 20:12:18.203776   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.203793   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.204088   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.204106   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.204118   49708 main.go:141] libmachine: Making call to close driver server
	I1024 20:12:18.204128   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) Calling .Close
	I1024 20:12:18.204348   49708 main.go:141] libmachine: (default-k8s-diff-port-643126) DBG | Closing plugin on server side
	I1024 20:12:18.204378   49708 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:12:18.204393   49708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:12:18.204406   49708 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-643126"
	I1024 20:12:13.290974   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:13.291494   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:13.291524   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:13.291402   50821 retry.go:31] will retry after 588.738796ms: waiting for machine to come up
	I1024 20:12:13.882058   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:13.882661   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:13.882685   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:13.882577   50821 retry.go:31] will retry after 626.457323ms: waiting for machine to come up
	I1024 20:12:14.510560   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:14.511120   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:14.511159   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:14.511059   50821 retry.go:31] will retry after 848.521213ms: waiting for machine to come up
	I1024 20:12:15.360917   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:15.361423   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:15.361452   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:15.361397   50821 retry.go:31] will retry after 790.780783ms: waiting for machine to come up
	I1024 20:12:16.153815   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:16.154332   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:16.154364   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:16.154274   50821 retry.go:31] will retry after 1.066691012s: waiting for machine to come up
	I1024 20:12:17.222675   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:17.223280   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:17.223309   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:17.223248   50821 retry.go:31] will retry after 1.657285361s: waiting for machine to come up
	I1024 20:12:18.299768   49708 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1024 20:12:16.196266   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:18.197531   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:18.397703   49708 node_ready.go:58] node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:17.907894   50077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 20:12:18.029064   50077 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 20:12:18.029174   50077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.029196   50077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.029209   50077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.029219   50077 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.029403   50077 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1024 20:12:18.029418   50077 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.029178   50077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.029178   50077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.030719   50077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.030726   50077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.030730   50077 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1024 20:12:18.030748   50077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.030775   50077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.030801   50077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.030972   50077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.031077   50077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.180435   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.182586   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.185966   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1024 20:12:18.190926   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.196636   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.198176   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.205102   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.285789   50077 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1024 20:12:18.285837   50077 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.285889   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.356595   50077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1024 20:12:18.356639   50077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.356678   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.370773   50077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:18.387248   50077 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1024 20:12:18.387295   50077 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.387343   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.387461   50077 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1024 20:12:18.387488   50077 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1024 20:12:18.387530   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400566   50077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1024 20:12:18.400608   50077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.400647   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400660   50077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1024 20:12:18.400705   50077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.400742   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400754   50077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1024 20:12:18.400785   50077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.400812   50077 ssh_runner.go:195] Run: which crictl
	I1024 20:12:18.400845   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1024 20:12:18.400814   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1024 20:12:18.545451   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1024 20:12:18.545541   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1024 20:12:18.545587   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1024 20:12:18.545674   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1024 20:12:18.545724   50077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 20:12:18.545777   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1024 20:12:18.545734   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1024 20:12:18.683462   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1024 20:12:18.683513   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1024 20:12:18.683578   50077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1024 20:12:18.683656   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1024 20:12:18.683686   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1024 20:12:18.683732   50077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1024 20:12:18.688916   50077 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1024 20:12:18.688954   50077 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1024 20:12:18.689040   50077 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1024 20:12:20.355824   50077 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.666754363s)
	I1024 20:12:20.355859   50077 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1024 20:12:20.355920   50077 cache_images.go:92] LoadImages completed in 2.326833316s
	W1024 20:12:20.356004   50077 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I1024 20:12:20.356080   50077 ssh_runner.go:195] Run: crio config
	I1024 20:12:20.428753   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:12:20.428775   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:20.428793   50077 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:12:20.428835   50077 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467375 NodeName:old-k8s-version-467375 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1024 20:12:20.429015   50077 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467375"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-467375
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.71:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:12:20.429115   50077 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467375 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:12:20.429179   50077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1024 20:12:20.440158   50077 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:12:20.440239   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:12:20.450883   50077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1024 20:12:20.470913   50077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:12:20.490653   50077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1024 20:12:20.510287   50077 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I1024 20:12:20.514815   50077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:20.526910   50077 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375 for IP: 192.168.39.71
	I1024 20:12:20.526943   50077 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:20.527172   50077 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:12:20.527227   50077 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:12:20.527313   50077 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.key
	I1024 20:12:20.527401   50077 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.key.f4667c0f
	I1024 20:12:20.527458   50077 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.key
	I1024 20:12:20.527617   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:12:20.527658   50077 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:12:20.527672   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:12:20.527712   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:12:20.527768   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:12:20.527803   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:12:20.527867   50077 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:20.528563   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:12:20.561437   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:12:20.593396   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:12:20.626812   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 20:12:20.659073   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:12:20.690934   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:12:20.723550   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:12:20.754091   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:12:20.785078   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:12:20.813190   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:12:20.845338   50077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:12:20.876594   50077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:12:20.899560   50077 ssh_runner.go:195] Run: openssl version
	I1024 20:12:20.907482   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:12:20.922776   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.929623   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.929693   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:12:20.935454   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:12:20.947494   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:12:20.958906   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.964115   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.964177   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:20.970084   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:12:20.982477   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:12:20.995317   50077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.000479   50077 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.000568   50077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:12:21.006797   50077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:12:21.020161   50077 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:12:21.025037   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:12:21.033376   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:12:21.041858   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:12:21.050119   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:12:21.058140   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:12:21.066151   50077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
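The `-checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours, which decides whether certificates must be regenerated before the restart. A rough Go equivalent of one such check (illustrative only; the path is copied from the log):

```go
// certcheck.go - illustrative: report whether a PEM certificate expires within
// the next 24 hours, mirroring `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt") // path from the log above
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}
```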
	I1024 20:12:21.074299   50077 kubeadm.go:404] StartCluster: {Name:old-k8s-version-467375 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467375 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:12:21.074409   50077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:12:21.074454   50077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:21.125486   50077 cri.go:89] found id: ""
	I1024 20:12:21.125559   50077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:12:21.139034   50077 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:12:21.139058   50077 kubeadm.go:636] restartCluster start
	I1024 20:12:21.139113   50077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:12:21.151994   50077 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.153569   50077 kubeconfig.go:92] found "old-k8s-version-467375" server: "https://192.168.39.71:8443"
	I1024 20:12:21.157114   50077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:12:21.169908   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.169998   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.186116   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.186138   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.186187   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.201283   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:21.702002   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:21.702084   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:21.717499   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:22.201839   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:22.201946   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:22.217814   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:22.702454   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:22.702525   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:22.720944   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:18.882382   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:18.882833   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:18.882869   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:18.882798   50821 retry.go:31] will retry after 1.854607935s: waiting for machine to come up
	I1024 20:12:20.738594   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:20.739327   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:20.739375   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:20.739255   50821 retry.go:31] will retry after 2.774006375s: waiting for machine to come up
	I1024 20:12:18.891092   49708 addons.go:502] enable addons completed in 2.901476764s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1024 20:12:20.898330   49708 node_ready.go:58] node "default-k8s-diff-port-643126" has status "Ready":"False"
	I1024 20:12:22.897985   49708 node_ready.go:49] node "default-k8s-diff-port-643126" has status "Ready":"True"
	I1024 20:12:22.898016   49708 node_ready.go:38] duration metric: took 6.51394456s waiting for node "default-k8s-diff-port-643126" to be "Ready" ...
	I1024 20:12:22.898029   49708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:12:22.907326   49708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:22.915330   49708 pod_ready.go:92] pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:22.915354   49708 pod_ready.go:81] duration metric: took 7.999933ms waiting for pod "coredns-5dd5756b68-mklhw" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:22.915366   49708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:20.698011   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:23.195726   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:23.201529   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:23.201620   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:23.215098   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:23.701482   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:23.701572   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:23.715481   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:24.201550   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:24.201610   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:24.218008   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:24.701489   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:24.701591   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:24.716960   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:25.201492   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:25.201558   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:25.215972   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:25.701398   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:25.701506   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:25.714016   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:26.201948   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:26.202018   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:26.215403   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:26.701876   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:26.701948   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:26.714598   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:27.202095   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:27.202161   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:27.215728   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:27.702476   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:27.702589   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:27.715925   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:23.514310   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:23.514813   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:23.514850   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:23.514763   50821 retry.go:31] will retry after 3.277478612s: waiting for machine to come up
	I1024 20:12:26.793845   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:26.794291   49071 main.go:141] libmachine: (no-preload-014826) DBG | unable to find current IP address of domain no-preload-014826 in network mk-no-preload-014826
	I1024 20:12:26.794312   49071 main.go:141] libmachine: (no-preload-014826) DBG | I1024 20:12:26.794249   50821 retry.go:31] will retry after 4.518205069s: waiting for machine to come up
	I1024 20:12:24.934951   49708 pod_ready.go:92] pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:24.934977   49708 pod_ready.go:81] duration metric: took 2.019602232s waiting for pod "etcd-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.934990   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.940403   49708 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:24.940424   49708 pod_ready.go:81] duration metric: took 5.425415ms waiting for pod "kube-apiserver-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:24.940437   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.805106   49708 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:25.805127   49708 pod_ready.go:81] duration metric: took 864.682784ms waiting for pod "kube-controller-manager-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.805137   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.096987   49708 pod_ready.go:92] pod "kube-proxy-x4zbh" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:26.097025   49708 pod_ready.go:81] duration metric: took 291.86715ms waiting for pod "kube-proxy-x4zbh" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.097040   49708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.497404   49708 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace has status "Ready":"True"
	I1024 20:12:26.497425   49708 pod_ready.go:81] duration metric: took 400.376909ms waiting for pod "kube-scheduler-default-k8s-diff-port-643126" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:26.497444   49708 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	I1024 20:12:25.694439   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:28.192955   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:28.201919   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:28.201990   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:28.215407   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:28.701578   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:28.701658   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:28.714135   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:29.202433   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:29.202553   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:29.214936   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:29.702439   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:29.702499   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:29.714852   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:30.202428   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:30.202500   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:30.214283   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:30.702441   50077 api_server.go:166] Checking apiserver status ...
	I1024 20:12:30.702500   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:30.715562   50077 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:31.170652   50077 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
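The half-second `pgrep` loop above runs until its context deadline fires; the "needs reconfigure" line is that deadline expiring with no kube-apiserver process found. A minimal sketch of the same deadline-bounded poll pattern (not the actual minikube implementation):

```go
// pollapiserver.go - illustrative: poll `pgrep -xnf <pattern>` on an interval
// until the process appears or the context deadline expires, similar in spirit
// to the repeated checks in the log above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if out, err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Output(); err == nil {
			fmt.Printf("found pid(s): %s", out)
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err()) // e.g. context deadline exceeded
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
		fmt.Println("needs reconfigure:", err)
	}
}
```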
	I1024 20:12:31.170682   50077 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:12:31.170693   50077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:12:31.170772   50077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:31.231971   50077 cri.go:89] found id: ""
	I1024 20:12:31.232068   50077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:12:31.249451   50077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:12:31.261057   50077 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:12:31.261124   50077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:31.270878   50077 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:12:31.270901   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:31.407803   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.357283   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.567466   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.659297   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:32.745553   50077 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:12:32.745629   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:32.761052   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:31.314269   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.314887   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has current primary IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.314912   49071 main.go:141] libmachine: (no-preload-014826) Found IP for machine: 192.168.50.162
	I1024 20:12:31.314926   49071 main.go:141] libmachine: (no-preload-014826) Reserving static IP address...
	I1024 20:12:31.315396   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "no-preload-014826", mac: "52:54:00:33:64:68", ip: "192.168.50.162"} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.315434   49071 main.go:141] libmachine: (no-preload-014826) DBG | skip adding static IP to network mk-no-preload-014826 - found existing host DHCP lease matching {name: "no-preload-014826", mac: "52:54:00:33:64:68", ip: "192.168.50.162"}
	I1024 20:12:31.315448   49071 main.go:141] libmachine: (no-preload-014826) Reserved static IP address: 192.168.50.162
	I1024 20:12:31.315465   49071 main.go:141] libmachine: (no-preload-014826) Waiting for SSH to be available...
	I1024 20:12:31.315483   49071 main.go:141] libmachine: (no-preload-014826) DBG | Getting to WaitForSSH function...
	I1024 20:12:31.318209   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.318611   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.318653   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.318819   49071 main.go:141] libmachine: (no-preload-014826) DBG | Using SSH client type: external
	I1024 20:12:31.318871   49071 main.go:141] libmachine: (no-preload-014826) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa (-rw-------)
	I1024 20:12:31.318916   49071 main.go:141] libmachine: (no-preload-014826) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 20:12:31.318941   49071 main.go:141] libmachine: (no-preload-014826) DBG | About to run SSH command:
	I1024 20:12:31.318957   49071 main.go:141] libmachine: (no-preload-014826) DBG | exit 0
	I1024 20:12:31.414054   49071 main.go:141] libmachine: (no-preload-014826) DBG | SSH cmd err, output: <nil>: 
	I1024 20:12:31.414566   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetConfigRaw
	I1024 20:12:31.415326   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:31.418120   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.418549   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.418582   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.418808   49071 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/config.json ...
	I1024 20:12:31.419009   49071 machine.go:88] provisioning docker machine ...
	I1024 20:12:31.419033   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:31.419222   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.419399   49071 buildroot.go:166] provisioning hostname "no-preload-014826"
	I1024 20:12:31.419423   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.419578   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.421861   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.422241   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.422273   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.422501   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.422676   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.422847   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.423066   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.423250   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.423707   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.423724   49071 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-014826 && echo "no-preload-014826" | sudo tee /etc/hostname
	I1024 20:12:31.557472   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-014826
	
	I1024 20:12:31.557504   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.560529   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.560928   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.560979   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.561201   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.561457   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.561654   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.561817   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.561968   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.562329   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.562357   49071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-014826' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-014826/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-014826' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:12:31.694896   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:12:31.694927   49071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9023/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9023/.minikube}
	I1024 20:12:31.694948   49071 buildroot.go:174] setting up certificates
	I1024 20:12:31.694959   49071 provision.go:83] configureAuth start
	I1024 20:12:31.694967   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetMachineName
	I1024 20:12:31.695264   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:31.697858   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.698148   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.698176   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.698357   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.700982   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.701332   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.701364   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.701570   49071 provision.go:138] copyHostCerts
	I1024 20:12:31.701625   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem, removing ...
	I1024 20:12:31.701642   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem
	I1024 20:12:31.701733   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/ca.pem (1078 bytes)
	I1024 20:12:31.701845   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem, removing ...
	I1024 20:12:31.701857   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem
	I1024 20:12:31.701883   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/cert.pem (1123 bytes)
	I1024 20:12:31.701947   49071 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem, removing ...
	I1024 20:12:31.701956   49071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem
	I1024 20:12:31.701978   49071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9023/.minikube/key.pem (1679 bytes)
	I1024 20:12:31.702043   49071 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem org=jenkins.no-preload-014826 san=[192.168.50.162 192.168.50.162 localhost 127.0.0.1 minikube no-preload-014826]
	I1024 20:12:31.798568   49071 provision.go:172] copyRemoteCerts
	I1024 20:12:31.798622   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:12:31.798642   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.801859   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.802237   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.802269   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.802465   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.802672   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.802867   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.803027   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:31.891633   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1024 20:12:31.916451   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1024 20:12:31.937924   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:12:31.961360   49071 provision.go:86] duration metric: configureAuth took 266.390893ms
	I1024 20:12:31.961384   49071 buildroot.go:189] setting minikube options for container-runtime
	I1024 20:12:31.961573   49071 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:12:31.961660   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:31.964354   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.964662   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:31.964719   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:31.964798   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:31.965002   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.965170   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:31.965329   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:31.965516   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:31.965961   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:31.965983   49071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:12:32.275884   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:12:32.275911   49071 machine.go:91] provisioned docker machine in 856.887593ms
	I1024 20:12:32.275923   49071 start.go:300] post-start starting for "no-preload-014826" (driver="kvm2")
	I1024 20:12:32.275935   49071 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:12:32.275957   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.276268   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:12:32.276298   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.279248   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.279642   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.279678   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.279798   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.279985   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.280182   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.280455   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.371931   49071 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:12:32.375989   49071 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 20:12:32.376009   49071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/addons for local assets ...
	I1024 20:12:32.376077   49071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9023/.minikube/files for local assets ...
	I1024 20:12:32.376173   49071 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem -> 162982.pem in /etc/ssl/certs
	I1024 20:12:32.376295   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:12:32.385018   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:32.408697   49071 start.go:303] post-start completed in 132.759815ms
	I1024 20:12:32.408719   49071 fix.go:56] fixHost completed within 21.530244363s
	I1024 20:12:32.408744   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.411800   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.412155   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.412189   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.412363   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.412574   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.412741   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.412916   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.413083   49071 main.go:141] libmachine: Using SSH client type: native
	I1024 20:12:32.413469   49071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.162 22 <nil> <nil>}
	I1024 20:12:32.413483   49071 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1024 20:12:32.534092   49071 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698178352.477877903
	
	I1024 20:12:32.534116   49071 fix.go:206] guest clock: 1698178352.477877903
	I1024 20:12:32.534127   49071 fix.go:219] Guest: 2023-10-24 20:12:32.477877903 +0000 UTC Remote: 2023-10-24 20:12:32.408724059 +0000 UTC m=+364.183674654 (delta=69.153844ms)
	I1024 20:12:32.534153   49071 fix.go:190] guest clock delta is within tolerance: 69.153844ms
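The guest clock check runs what appears to be `date +%s.%N` over SSH (the `%!s(MISSING)` noise above is Go's printf complaining while logging that format string) and compares the result with the host-side timestamp of the call. A small sketch reproducing the comparison with the two values from this run (the one-second tolerance is an assumption for illustration, not necessarily minikube's exact value):

```go
// clockdelta.go - illustrative: compare the guest timestamp (seconds.nanoseconds)
// with the host-side timestamp and check the drift against a tolerance.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Values copied from the log lines above.
	guestRaw := "1698178352.477877903"
	hostRaw := "2023-10-24 20:12:32.408724059 +0000 UTC"

	sec, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(sec*1e9)).UTC()

	host, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", hostRaw)
	if err != nil {
		panic(err)
	}

	delta := guest.Sub(host) // roughly 69ms in the run above (modulo float rounding)
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second // assumed tolerance for illustration
	fmt.Printf("delta=%v within tolerance=%v: %t\n", delta, tolerance, delta <= tolerance)
}
```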
	I1024 20:12:32.534159   49071 start.go:83] releasing machines lock for "no-preload-014826", held for 21.655714466s
	I1024 20:12:32.534185   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.534468   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:32.537523   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.537932   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.537961   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.538160   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.538690   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.538919   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:12:32.539004   49071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:12:32.539089   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.539138   49071 ssh_runner.go:195] Run: cat /version.json
	I1024 20:12:32.539166   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:12:32.542176   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542308   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542652   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.542689   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:32.542714   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542732   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:32.542981   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.542985   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:12:32.543207   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.543214   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:12:32.543387   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.543429   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:12:32.543573   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.543579   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:12:32.631242   49071 ssh_runner.go:195] Run: systemctl --version
	I1024 20:12:32.657695   49071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:12:32.808471   49071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 20:12:32.815640   49071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 20:12:32.815712   49071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:12:32.830198   49071 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:12:32.830219   49071 start.go:472] detecting cgroup driver to use...
	I1024 20:12:32.830295   49071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:12:32.845231   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:12:32.863283   49071 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:12:32.863328   49071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:12:32.878295   49071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:12:32.894182   49071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:12:33.024491   49071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:12:33.156548   49071 docker.go:214] disabling docker service ...
	I1024 20:12:33.156621   49071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:12:33.169940   49071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:12:33.182368   49071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:12:28.804366   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:30.806145   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:32.806217   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:30.193022   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:32.195173   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:33.297156   49071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:12:33.434526   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:12:33.453482   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:12:33.471594   49071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:12:33.471665   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.481491   49071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:12:33.481563   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.490505   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:12:33.500003   49071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
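The `sed -i` edits above pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, and re-add `conmon_cgroup = "pod"` in /etc/crio/crio.conf.d/02-crio.conf. A self-contained sketch of the same substitutions applied to an in-memory config (the starting file content below is invented for illustration):

```go
// crioconf.go - illustrative: the substitutions the sed commands in the log
// perform on /etc/crio/crio.conf.d/02-crio.conf, done on a string.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pause_image = "registry.k8s.io/pause:3.9"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// cgroup_manager = "cgroupfs"
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then append conmon_cgroup = "pod"
	// after the cgroup_manager line, mirroring the sed '/d' + '/a' pair.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = strings.Replace(conf,
		`cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)
	fmt.Print(conf)
}
```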
	I1024 20:12:33.509825   49071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:12:33.524014   49071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:12:33.532876   49071 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1024 20:12:33.532936   49071 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1024 20:12:33.545922   49071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:12:33.554519   49071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:12:33.661858   49071 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:12:33.867286   49071 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:12:33.867361   49071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:12:33.873180   49071 start.go:540] Will wait 60s for crictl version
	I1024 20:12:33.873259   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:33.877238   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:12:33.918479   49071 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1024 20:12:33.918624   49071 ssh_runner.go:195] Run: crio --version
	I1024 20:12:33.970986   49071 ssh_runner.go:195] Run: crio --version
	I1024 20:12:34.026667   49071 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1024 20:12:33.278190   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:33.777448   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:34.277381   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:12:34.320204   50077 api_server.go:72] duration metric: took 1.574651034s to wait for apiserver process to appear ...
	I1024 20:12:34.320230   50077 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:12:34.320258   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.320744   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I1024 20:12:34.320773   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:34.321162   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I1024 20:12:34.821724   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
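The healthz probe treats "connection refused" as "not up yet" and keeps retrying against https://192.168.39.71:8443/healthz. An illustrative poll of that kind (TLS verification is skipped here for brevity; a real client would trust the cluster CA instead):

```go
// healthz.go - illustrative: poll an apiserver /healthz endpoint until it
// returns 200 OK or the overall deadline expires.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			err = fmt.Errorf("healthz returned %s", resp.Status)
		}
		fmt.Println("stopped:", err) // e.g. "connect: connection refused"
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://192.168.39.71:8443/healthz"); err != nil {
		fmt.Println("apiserver never became healthy:", err)
	}
}
```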
	I1024 20:12:34.028144   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetIP
	I1024 20:12:34.031311   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:34.031699   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:12:34.031733   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:12:34.031888   49071 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1024 20:12:34.036386   49071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:34.052307   49071 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:12:34.052360   49071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:12:34.099209   49071 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1024 20:12:34.099236   49071 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 20:12:34.099291   49071 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.099331   49071 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.099331   49071 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.099414   49071 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.099497   49071 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1024 20:12:34.099512   49071 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.099547   49071 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.099575   49071 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.101069   49071 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.101083   49071 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.101096   49071 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1024 20:12:34.101077   49071 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.101135   49071 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.101147   49071 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.101173   49071 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.101428   49071 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.283586   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.292930   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.294280   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.303296   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1024 20:12:34.314337   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.323356   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.327726   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.373724   49071 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1024 20:12:34.373774   49071 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.373819   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.466499   49071 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1024 20:12:34.466540   49071 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.466582   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.487167   49071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.489929   49071 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1024 20:12:34.489986   49071 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.490027   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588137   49071 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1024 20:12:34.588178   49071 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.588206   49071 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1024 20:12:34.588231   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588248   49071 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.588286   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588308   49071 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1024 20:12:34.588330   49071 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.588340   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1024 20:12:34.588358   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588388   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1024 20:12:34.588410   49071 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1024 20:12:34.588427   49071 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.588447   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:12:34.588448   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1024 20:12:34.605099   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 20:12:34.693897   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1024 20:12:34.694097   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1024 20:12:34.694204   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.707142   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:12:34.707184   49071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1024 20:12:34.707265   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1024 20:12:34.707388   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:34.707384   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1024 20:12:34.707516   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:34.722106   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1024 20:12:34.722205   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:34.776997   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1024 20:12:34.777019   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.777067   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1024 20:12:34.777089   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1024 20:12:34.777180   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:34.804122   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1024 20:12:34.804241   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:34.814486   49071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1024 20:12:34.814532   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1024 20:12:34.814567   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1024 20:12:34.814607   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1024 20:12:34.814634   49071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:38.115460   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (3.338366217s)
	I1024 20:12:38.115492   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1024 20:12:38.115516   49071 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:38.115548   49071 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3: (3.338341429s)
	I1024 20:12:38.115570   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1024 20:12:38.115586   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1024 20:12:38.115618   49071 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3: (3.311351093s)
	I1024 20:12:38.115644   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1024 20:12:38.115650   49071 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.30100028s)
	I1024 20:12:38.115665   49071 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
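(The cache-load loop above boils down to: check whether the runtime already holds each image, remove any stale tag with crictl, then stream the cached tarball in with podman. A hedged sketch of one iteration, using the kube-apiserver image name and tarball path from this run; the real code compares image IDs against an expected digest rather than merely testing presence:)

    IMG=registry.k8s.io/kube-apiserver:v1.28.3
    TAR=/var/lib/minikube/images/kube-apiserver_v1.28.3
    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
      sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true   # clear any stale tag
      sudo podman load -i "$TAR"                            # load from the minikube cache
    fi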
	I1024 20:12:34.807460   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:37.307370   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:34.696540   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:37.192160   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:39.822511   50077 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1024 20:12:39.822561   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:40.734083   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:12:40.734125   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:12:40.734161   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:40.777985   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1024 20:12:40.778037   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1024 20:12:40.822134   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.042292   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.042343   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:41.321887   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.363625   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.363682   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:41.821995   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:41.828080   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1024 20:12:41.828114   50077 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1024 20:12:42.321381   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:12:42.331626   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1024 20:12:42.342584   50077 api_server.go:141] control plane version: v1.16.0
	I1024 20:12:42.342614   50077 api_server.go:131] duration metric: took 8.022377051s to wait for apiserver health ...
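(The healthz wait above is an unauthenticated HTTPS probe of /healthz that tolerates the connection-refused, 403 "system:anonymous", and 500 "poststarthook ... failed" phases until the endpoint finally returns a plain "ok". A rough equivalent, assuming the apiserver address from this run; -k mirrors the fact that the probe does not verify the serving certificate:)

    until curl -ks --max-time 2 https://192.168.39.71:8443/healthz | grep -qx ok; do
      sleep 0.5   # keep polling until the control plane reports healthy
    done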
	I1024 20:12:42.342626   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:12:42.342634   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:42.344676   50077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:12:42.346118   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:12:42.363399   50077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
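(The 457-byte file written to /etc/cni/net.d/1-k8s.conflist configures the standard bridge CNI plugin for the cluster's pod CIDR, 10.244.0.0/16 by default. Its exact contents are not shown in the log; the following is only an illustrative bridge conflist of that general shape, not the file minikube wrote:)

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }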
	I1024 20:12:42.389481   50077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:12:42.403326   50077 system_pods.go:59] 7 kube-system pods found
	I1024 20:12:42.403370   50077 system_pods.go:61] "coredns-5644d7b6d9-x567q" [1dc7f1c2-4997-4330-a9bc-b914b1c1db9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:12:42.403381   50077 system_pods.go:61] "etcd-old-k8s-version-467375" [62c8ab28-033f-43fa-96b2-e127d8d46730] Running
	I1024 20:12:42.403389   50077 system_pods.go:61] "kube-apiserver-old-k8s-version-467375" [87c58a79-9f12-4be3-a450-69aa22674541] Running
	I1024 20:12:42.403398   50077 system_pods.go:61] "kube-controller-manager-old-k8s-version-467375" [6bf66f9f-1431-4b3f-b186-528945c54a63] Running
	I1024 20:12:42.403412   50077 system_pods.go:61] "kube-proxy-jdvck" [d35f42b9-9be8-43ee-8434-3d557e31bfde] Running
	I1024 20:12:42.403418   50077 system_pods.go:61] "kube-scheduler-old-k8s-version-467375" [63ae0d31-ace3-4490-a2e8-ed110e3a1072] Running
	I1024 20:12:42.403424   50077 system_pods.go:61] "storage-provisioner" [9105f8d8-3aa1-422d-acf2-9f83e9ede8af] Running
	I1024 20:12:42.403431   50077 system_pods.go:74] duration metric: took 13.927429ms to wait for pod list to return data ...
	I1024 20:12:42.403440   50077 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:12:42.408844   50077 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:12:42.408890   50077 node_conditions.go:123] node cpu capacity is 2
	I1024 20:12:42.408905   50077 node_conditions.go:105] duration metric: took 5.459392ms to run NodePressure ...
	I1024 20:12:42.408926   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:12:42.701645   50077 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:12:42.707084   50077 retry.go:31] will retry after 366.455415ms: kubelet not initialised
	I1024 20:12:39.807495   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:42.306172   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:39.193434   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:41.195135   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:43.694847   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:43.078083   50077 retry.go:31] will retry after 411.231242ms: kubelet not initialised
	I1024 20:12:43.494711   50077 retry.go:31] will retry after 768.972767ms: kubelet not initialised
	I1024 20:12:44.268690   50077 retry.go:31] will retry after 693.655783ms: kubelet not initialised
	I1024 20:12:45.186580   50077 retry.go:31] will retry after 1.610937297s: kubelet not initialised
	I1024 20:12:46.803897   50077 retry.go:31] will retry after 959.133509ms: kubelet not initialised
	I1024 20:12:47.768260   50077 retry.go:31] will retry after 1.51466069s: kubelet not initialised
	I1024 20:12:45.464752   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.34915976s)
	I1024 20:12:45.464779   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1024 20:12:45.464821   49071 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:45.464899   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1024 20:12:46.936699   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.471766425s)
	I1024 20:12:46.936725   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1024 20:12:46.936750   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:46.936790   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1024 20:12:44.806094   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:46.807137   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:45.696196   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:48.192732   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:49.288179   50077 retry.go:31] will retry after 5.048749504s: kubelet not initialised
	I1024 20:12:49.615688   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (2.678859869s)
	I1024 20:12:49.615726   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1024 20:12:49.615763   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:49.615840   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1024 20:12:51.387159   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.771279542s)
	I1024 20:12:51.387185   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1024 20:12:51.387209   49071 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:51.387258   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1024 20:12:52.868127   49071 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.480840395s)
	I1024 20:12:52.868158   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1024 20:12:52.868184   49071 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:52.868233   49071 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1024 20:12:49.304156   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:51.305456   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:53.307726   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:50.195756   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:52.196133   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:54.342759   50077 retry.go:31] will retry after 8.402807892s: kubelet not initialised
	I1024 20:12:53.617841   49071 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17485-9023/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1024 20:12:53.617883   49071 cache_images.go:123] Successfully loaded all cached images
	I1024 20:12:53.617889   49071 cache_images.go:92] LoadImages completed in 19.518639759s
	I1024 20:12:53.617972   49071 ssh_runner.go:195] Run: crio config
	I1024 20:12:53.677157   49071 cni.go:84] Creating CNI manager for ""
	I1024 20:12:53.677181   49071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:12:53.677198   49071 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:12:53.677215   49071 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.162 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-014826 NodeName:no-preload-014826 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:12:53.677386   49071 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-014826"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 20:12:53.677482   49071 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-014826 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-014826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
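(At this point the generated kubeadm config, the kubelet systemd unit, and its 10-kubeadm.conf drop-in are only staged; nothing has been applied yet. One hedged way to sanity-check the staged config, written as /var/tmp/minikube/kubeadm.yaml.new a few lines below, before the init phases run, using this run's binaries path:)

    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run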
	I1024 20:12:53.677552   49071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:12:53.688840   49071 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:12:53.688904   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:12:53.700095   49071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1024 20:12:53.717176   49071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:12:53.737316   49071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1024 20:12:53.756100   49071 ssh_runner.go:195] Run: grep 192.168.50.162	control-plane.minikube.internal$ /etc/hosts
	I1024 20:12:53.760013   49071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:12:53.771571   49071 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826 for IP: 192.168.50.162
	I1024 20:12:53.771601   49071 certs.go:190] acquiring lock for shared ca certs: {Name:mk0147458e857a8d1cc546ce68803c47275ad5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:12:53.771752   49071 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key
	I1024 20:12:53.771811   49071 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key
	I1024 20:12:53.771896   49071 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.key
	I1024 20:12:53.771975   49071 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.key.1b8245f8
	I1024 20:12:53.772056   49071 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.key
	I1024 20:12:53.772205   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem (1338 bytes)
	W1024 20:12:53.772250   49071 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298_empty.pem, impossibly tiny 0 bytes
	I1024 20:12:53.772262   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 20:12:53.772303   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/ca.pem (1078 bytes)
	I1024 20:12:53.772333   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:12:53.772354   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/certs/home/jenkins/minikube-integration/17485-9023/.minikube/certs/key.pem (1679 bytes)
	I1024 20:12:53.772397   49071 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem (1708 bytes)
	I1024 20:12:53.773081   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:12:53.797387   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 20:12:53.822084   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:12:53.846401   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 20:12:53.869361   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:12:53.891519   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:12:53.914051   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:12:53.935925   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1024 20:12:53.958389   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/ssl/certs/162982.pem --> /usr/share/ca-certificates/162982.pem (1708 bytes)
	I1024 20:12:53.982011   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:12:54.005921   49071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9023/.minikube/certs/16298.pem --> /usr/share/ca-certificates/16298.pem (1338 bytes)
	I1024 20:12:54.029793   49071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:12:54.047319   49071 ssh_runner.go:195] Run: openssl version
	I1024 20:12:54.053493   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16298.pem && ln -fs /usr/share/ca-certificates/16298.pem /etc/ssl/certs/16298.pem"
	I1024 20:12:54.064414   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.069060   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:10 /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.069115   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16298.pem
	I1024 20:12:54.075137   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16298.pem /etc/ssl/certs/51391683.0"
	I1024 20:12:54.088046   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162982.pem && ln -fs /usr/share/ca-certificates/162982.pem /etc/ssl/certs/162982.pem"
	I1024 20:12:54.099949   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.104810   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:10 /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.104867   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162982.pem
	I1024 20:12:54.110617   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162982.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:12:54.122160   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:12:54.133062   49071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.137858   49071 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.137922   49071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:12:54.144146   49071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:12:54.155998   49071 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:12:54.160989   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 20:12:54.167441   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 20:12:54.173797   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 20:12:54.180320   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 20:12:54.186876   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 20:12:54.193624   49071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
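(Each of the openssl calls above uses -checkend 86400, which exits 0 only if the certificate is still valid 24 hours from now; a non-zero exit would flag the cert as expired or expiring. For example:)

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expired or expiring within 24h"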
	I1024 20:12:54.200066   49071 kubeadm.go:404] StartCluster: {Name:no-preload-014826 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-014826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:12:54.200165   49071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:12:54.200202   49071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:12:54.253207   49071 cri.go:89] found id: ""
	I1024 20:12:54.253267   49071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:12:54.264316   49071 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 20:12:54.264348   49071 kubeadm.go:636] restartCluster start
	I1024 20:12:54.264404   49071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 20:12:54.276382   49071 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.277506   49071 kubeconfig.go:92] found "no-preload-014826" server: "https://192.168.50.162:8443"
	I1024 20:12:54.279888   49071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 20:12:54.290005   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.290052   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.302383   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
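(The repeated "Checking apiserver status ..." lines are simply a pgrep for a kube-apiserver process whose command line mentions minikube; exit status 1 means no such process yet, which is what produces the "stopped: unable to get apiserver pid" messages until the control plane is restarted further below. The probe, with its exit status made visible:)

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo "pgrep exit: $?"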
	I1024 20:12:54.302400   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.302447   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.315130   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:54.815483   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:54.815574   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:54.827862   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.315372   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:55.315430   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:55.328409   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.816079   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:55.816141   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:55.829755   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:56.315782   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:56.315869   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:56.329006   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:56.815526   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:56.815621   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:56.828167   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:57.315692   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:57.315781   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:57.328590   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:57.816175   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:57.816250   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:57.832014   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:55.805830   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:57.810013   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:54.692702   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:57.192210   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:02.750533   50077 retry.go:31] will retry after 7.667287878s: kubelet not initialised
	I1024 20:12:58.315841   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:58.315922   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:58.329743   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:58.815711   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:58.815779   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:58.828215   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:59.315817   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:59.315924   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:59.328911   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:12:59.815493   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:12:59.815583   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:12:59.829684   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.316215   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:00.316294   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:00.330227   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.815830   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:00.815901   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:00.828290   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:01.315228   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:01.315319   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:01.329972   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:01.815426   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:01.815495   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:01.829199   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:02.315754   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:02.315834   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:02.328463   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:02.816091   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:02.816175   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:02.830548   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:00.304116   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:02.304336   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:12:59.193761   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:01.692343   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:03.693961   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:03.315186   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:03.315249   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:03.327729   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:03.815302   49071 api_server.go:166] Checking apiserver status ...
	I1024 20:13:03.815389   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 20:13:03.827308   49071 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 20:13:04.290952   49071 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 20:13:04.290993   49071 kubeadm.go:1128] stopping kube-system containers ...
	I1024 20:13:04.291005   49071 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 20:13:04.291078   49071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:13:04.333468   49071 cri.go:89] found id: ""
	I1024 20:13:04.333543   49071 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 20:13:04.351889   49071 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:13:04.362176   49071 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:13:04.362251   49071 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:13:04.372650   49071 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 20:13:04.372683   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:04.495803   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.080838   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.290640   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.379839   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:05.458741   49071 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:13:05.458843   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:05.475039   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:05.997438   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:06.496596   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:06.996587   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:07.496933   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:13:07.514268   49071 api_server.go:72] duration metric: took 2.055524654s to wait for apiserver process to appear ...
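The lines above show minikube polling `sudo pgrep -xnf kube-apiserver.*minikube.*` inside the VM until the restarted kube-apiserver process appears (about 2s in this run). A minimal local sketch of that kind of wait loop, assuming a hypothetical waitForProcess helper and an illustrative 500ms poll interval (not minikube's actual code, which runs the same command through its ssh_runner against the VM):

    // waitForProcess polls `pgrep -xnf <pattern>` until a matching process
    // appears or the deadline passes. Illustrative sketch only; helper name,
    // interval and timeout are assumptions, not minikube's implementation.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches the pattern.
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }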
	I1024 20:13:07.514294   49071 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:13:07.514310   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:07.514802   49071 api_server.go:269] stopped: https://192.168.50.162:8443/healthz: Get "https://192.168.50.162:8443/healthz": dial tcp 192.168.50.162:8443: connect: connection refused
	I1024 20:13:07.514840   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:07.515243   49071 api_server.go:269] stopped: https://192.168.50.162:8443/healthz: Get "https://192.168.50.162:8443/healthz": dial tcp 192.168.50.162:8443: connect: connection refused
	I1024 20:13:08.015912   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:04.306097   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:06.805484   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:05.698099   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:08.196336   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:10.424613   50077 retry.go:31] will retry after 17.161095389s: kubelet not initialised
	I1024 20:13:12.512885   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.512923   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:12.512936   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:12.564368   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.564415   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:12.564435   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:12.578188   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 20:13:12.578210   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 20:13:13.015415   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:13.022900   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:13:13.022939   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:13:09.305906   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:11.805107   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:10.693989   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:12.696233   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:13.515731   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:13.520510   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 20:13:13.520565   49071 api_server.go:103] status: https://192.168.50.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 20:13:14.015693   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:13:14.021308   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 200:
	ok
	I1024 20:13:14.029247   49071 api_server.go:141] control plane version: v1.28.3
	I1024 20:13:14.029271   49071 api_server.go:131] duration metric: took 6.514969351s to wait for apiserver health ...
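The healthz probes above follow a typical restart sequence: connection refused while the apiserver is still binding, 403 because the unauthenticated probe reaches /healthz as system:anonymous before the RBAC bootstrap roles exist, 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200. A minimal sketch of such a poller, assuming TLS verification is skipped and with an illustrative poll interval (a sketch of the behaviour seen here, not minikube's api_server.go):

    // pollHealthz hits the apiserver's /healthz endpoint until it answers 200 OK
    // or the deadline passes. No client certificate is sent and TLS verification
    // is skipped, so a freshly restarted apiserver may answer 403 or 500 first,
    // as in the log above. Names and timings are illustrative assumptions.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // body is "ok"
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz did not return 200 within %s", timeout)
    }

    func main() {
        if err := pollHealthz("https://192.168.50.162:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }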
	I1024 20:13:14.029281   49071 cni.go:84] Creating CNI manager for ""
	I1024 20:13:14.029289   49071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:13:14.031023   49071 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:13:14.032390   49071 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:13:14.042542   49071 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
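Here minikube writes a 457-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the file's contents are not captured in the log. Purely as an assumption about its shape, a bridge conflist for this kind of setup typically looks roughly like the following (plugin options and subnet are illustrative, not the file minikube actually wrote):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }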
	I1024 20:13:14.061827   49071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:13:14.077006   49071 system_pods.go:59] 8 kube-system pods found
	I1024 20:13:14.077041   49071 system_pods.go:61] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 20:13:14.077058   49071 system_pods.go:61] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 20:13:14.077068   49071 system_pods.go:61] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 20:13:14.077078   49071 system_pods.go:61] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 20:13:14.077088   49071 system_pods.go:61] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 20:13:14.077102   49071 system_pods.go:61] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 20:13:14.077114   49071 system_pods.go:61] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:13:14.077125   49071 system_pods.go:61] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 20:13:14.077140   49071 system_pods.go:74] duration metric: took 15.296766ms to wait for pod list to return data ...
	I1024 20:13:14.077150   49071 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:13:14.080871   49071 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:13:14.080896   49071 node_conditions.go:123] node cpu capacity is 2
	I1024 20:13:14.080908   49071 node_conditions.go:105] duration metric: took 3.7473ms to run NodePressure ...
	I1024 20:13:14.080921   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 20:13:14.292868   49071 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 20:13:14.297583   49071 kubeadm.go:787] kubelet initialised
	I1024 20:13:14.297611   49071 kubeadm.go:788] duration metric: took 4.717728ms waiting for restarted kubelet to initialise ...
	I1024 20:13:14.297621   49071 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:14.303742   49071 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.309570   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.309600   49071 pod_ready.go:81] duration metric: took 5.835917ms waiting for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.309608   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.309616   49071 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.316423   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "etcd-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.316453   49071 pod_ready.go:81] duration metric: took 6.829373ms waiting for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.316577   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "etcd-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.316593   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.325238   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-apiserver-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.325271   49071 pod_ready.go:81] duration metric: took 8.669582ms waiting for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.325280   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-apiserver-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.325288   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.466293   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.466319   49071 pod_ready.go:81] duration metric: took 141.023699ms waiting for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.466331   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.466342   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:14.865820   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-proxy-hvphg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.865855   49071 pod_ready.go:81] duration metric: took 399.504017ms waiting for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:14.865867   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-proxy-hvphg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:14.865876   49071 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:15.266786   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "kube-scheduler-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.266820   49071 pod_ready.go:81] duration metric: took 400.936146ms waiting for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:15.266833   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "kube-scheduler-no-preload-014826" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.266844   49071 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:15.666547   49071 pod_ready.go:97] node "no-preload-014826" hosting pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.666582   49071 pod_ready.go:81] duration metric: took 399.72944ms waiting for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	E1024 20:13:15.666596   49071 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-014826" hosting pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:15.666617   49071 pod_ready.go:38] duration metric: took 1.368975115s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
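The pod_ready.go lines throughout this log come from repeatedly reading each system-critical pod and checking its Ready condition until it is True or the wait times out. A minimal client-go sketch of that check, assuming a hypothetical isPodReady helper, a 2s poll interval, and a kubeconfig at the default location (a sketch of the idea, not minikube's implementation):

    // isPodReady reports whether the named pod has condition Ready=True.
    // Helper name, pod name and polling interval below are illustrative.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        for {
            ready, err := isPodReady(ctx, cs, "kube-system", "metrics-server-57f55c9bc5-tsfvs")
            if err == nil && ready {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }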
	I1024 20:13:15.666636   49071 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:13:15.686675   49071 ops.go:34] apiserver oom_adj: -16
	I1024 20:13:15.686696   49071 kubeadm.go:640] restartCluster took 21.422341568s
	I1024 20:13:15.686706   49071 kubeadm.go:406] StartCluster complete in 21.486646231s
	I1024 20:13:15.686737   49071 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:13:15.686823   49071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:13:15.688903   49071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:13:15.689192   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:13:15.689321   49071 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:13:15.689405   49071 addons.go:69] Setting storage-provisioner=true in profile "no-preload-014826"
	I1024 20:13:15.689423   49071 addons.go:231] Setting addon storage-provisioner=true in "no-preload-014826"
	I1024 20:13:15.689462   49071 addons.go:69] Setting metrics-server=true in profile "no-preload-014826"
	I1024 20:13:15.689490   49071 addons.go:231] Setting addon metrics-server=true in "no-preload-014826"
	W1024 20:13:15.689512   49071 addons.go:240] addon metrics-server should already be in state true
	I1024 20:13:15.689560   49071 host.go:66] Checking if "no-preload-014826" exists ...
	W1024 20:13:15.689463   49071 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:13:15.689649   49071 host.go:66] Checking if "no-preload-014826" exists ...
	I1024 20:13:15.689445   49071 addons.go:69] Setting default-storageclass=true in profile "no-preload-014826"
	I1024 20:13:15.689716   49071 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-014826"
	I1024 20:13:15.689431   49071 config.go:182] Loaded profile config "no-preload-014826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:13:15.690018   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690051   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.690060   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690086   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.690173   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.690225   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.695832   49071 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-014826" context rescaled to 1 replicas
	I1024 20:13:15.695868   49071 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:13:15.698104   49071 out.go:177] * Verifying Kubernetes components...
	I1024 20:13:15.701812   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:13:15.708637   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45543
	I1024 20:13:15.709086   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.709579   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I1024 20:13:15.709941   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.709959   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.710044   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.710478   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.710629   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.710640   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.710943   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.710954   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.711125   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.711367   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I1024 20:13:15.711702   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.711739   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.711852   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.712441   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.712453   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.713081   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.713312   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.717141   49071 addons.go:231] Setting addon default-storageclass=true in "no-preload-014826"
	W1024 20:13:15.717173   49071 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:13:15.717201   49071 host.go:66] Checking if "no-preload-014826" exists ...
	I1024 20:13:15.717655   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.717688   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.729423   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38983
	I1024 20:13:15.730145   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.730747   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.730763   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.730811   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
	I1024 20:13:15.731224   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.731294   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.731487   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.731691   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.731704   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.732239   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.732712   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.733909   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.736374   49071 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:13:15.734682   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.736231   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I1024 20:13:15.738165   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:13:15.738178   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:13:15.738198   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.739819   49071 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:13:15.741717   49071 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:13:15.741733   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:13:15.741752   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.739693   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.742202   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.742374   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.742389   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.742978   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.743000   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.743088   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.743253   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.743408   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.743896   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.744551   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.745028   49071 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:13:15.745145   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.745266   49071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:13:15.745462   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.745486   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.745735   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.745870   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.745956   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.746023   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.782650   49071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I1024 20:13:15.783126   49071 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:13:15.783699   49071 main.go:141] libmachine: Using API Version  1
	I1024 20:13:15.783721   49071 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:13:15.784051   49071 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:13:15.784270   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetState
	I1024 20:13:15.786114   49071 main.go:141] libmachine: (no-preload-014826) Calling .DriverName
	I1024 20:13:15.786409   49071 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:13:15.786424   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:13:15.786439   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHHostname
	I1024 20:13:15.788982   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.789347   49071 main.go:141] libmachine: (no-preload-014826) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:64:68", ip: ""} in network mk-no-preload-014826: {Iface:virbr2 ExpiryTime:2023-10-24 21:12:24 +0000 UTC Type:0 Mac:52:54:00:33:64:68 Iaid: IPaddr:192.168.50.162 Prefix:24 Hostname:no-preload-014826 Clientid:01:52:54:00:33:64:68}
	I1024 20:13:15.789376   49071 main.go:141] libmachine: (no-preload-014826) DBG | domain no-preload-014826 has defined IP address 192.168.50.162 and MAC address 52:54:00:33:64:68 in network mk-no-preload-014826
	I1024 20:13:15.789622   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHPort
	I1024 20:13:15.789838   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHKeyPath
	I1024 20:13:15.790047   49071 main.go:141] libmachine: (no-preload-014826) Calling .GetSSHUsername
	I1024 20:13:15.790195   49071 sshutil.go:53] new ssh client: &{IP:192.168.50.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/no-preload-014826/id_rsa Username:docker}
	I1024 20:13:15.870753   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:13:15.870771   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:13:15.893772   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:13:15.893799   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:13:15.916179   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:13:15.928570   49071 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:13:15.928596   49071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:13:15.950610   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:13:15.987129   49071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:13:15.987945   49071 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 20:13:15.987993   49071 node_ready.go:35] waiting up to 6m0s for node "no-preload-014826" to be "Ready" ...
	I1024 20:13:17.450534   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.53431699s)
	I1024 20:13:17.450534   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.499892733s)
	I1024 20:13:17.450586   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.450597   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.450609   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.450621   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451126   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451143   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451152   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451160   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451176   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.451180   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451186   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451190   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.451200   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.451211   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451380   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451410   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.451415   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451429   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.451430   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.451442   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.464276   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.464297   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.464561   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.464578   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.464585   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.626276   49071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.639098267s)
	I1024 20:13:17.626344   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.626364   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.626686   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.626711   49071 main.go:141] libmachine: (no-preload-014826) DBG | Closing plugin on server side
	I1024 20:13:17.626713   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.626765   49071 main.go:141] libmachine: Making call to close driver server
	I1024 20:13:17.626779   49071 main.go:141] libmachine: (no-preload-014826) Calling .Close
	I1024 20:13:17.627054   49071 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:13:17.627071   49071 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:13:17.627082   49071 addons.go:467] Verifying addon metrics-server=true in "no-preload-014826"
	I1024 20:13:17.629289   49071 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1024 20:13:17.630781   49071 addons.go:502] enable addons completed in 1.94145774s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1024 20:13:18.084997   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:13.805526   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:15.807970   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:18.305400   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:15.194668   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:17.694096   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:20.085063   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:22.086260   49071 node_ready.go:58] node "no-preload-014826" has status "Ready":"False"
	I1024 20:13:23.087300   49071 node_ready.go:49] node "no-preload-014826" has status "Ready":"True"
	I1024 20:13:23.087338   49071 node_ready.go:38] duration metric: took 7.0993157s waiting for node "no-preload-014826" to be "Ready" ...
	I1024 20:13:23.087350   49071 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:23.093785   49071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:23.101553   49071 pod_ready.go:92] pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:23.101576   49071 pod_ready.go:81] duration metric: took 7.766543ms waiting for pod "coredns-5dd5756b68-gnn8j" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:23.101588   49071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:20.808097   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:23.306150   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:19.696002   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:22.195097   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:27.592041   50077 kubeadm.go:787] kubelet initialised
	I1024 20:13:27.592064   50077 kubeadm.go:788] duration metric: took 44.890387595s waiting for restarted kubelet to initialise ...
	I1024 20:13:27.592071   50077 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:13:27.596611   50077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.601949   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.601972   50077 pod_ready.go:81] duration metric: took 5.342417ms waiting for pod "coredns-5644d7b6d9-kbdsh" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.601979   50077 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.607096   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.607118   50077 pod_ready.go:81] duration metric: took 5.132259ms waiting for pod "coredns-5644d7b6d9-x567q" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.607130   50077 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.611971   50077 pod_ready.go:92] pod "etcd-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.611991   50077 pod_ready.go:81] duration metric: took 4.854068ms waiting for pod "etcd-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.612002   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.616975   50077 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.616995   50077 pod_ready.go:81] duration metric: took 4.985984ms waiting for pod "kube-apiserver-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.617006   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.620272   49071 pod_ready.go:92] pod "etcd-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:24.620294   49071 pod_ready.go:81] duration metric: took 1.518699618s waiting for pod "etcd-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.620304   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.625954   49071 pod_ready.go:92] pod "kube-apiserver-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:24.625975   49071 pod_ready.go:81] duration metric: took 5.666043ms waiting for pod "kube-apiserver-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:24.625985   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.096309   49071 pod_ready.go:92] pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.096338   49071 pod_ready.go:81] duration metric: took 2.470345358s waiting for pod "kube-controller-manager-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.096363   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.101417   49071 pod_ready.go:92] pod "kube-proxy-hvphg" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.101439   49071 pod_ready.go:81] duration metric: took 5.060638ms waiting for pod "kube-proxy-hvphg" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.101457   49071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.487627   49071 pod_ready.go:92] pod "kube-scheduler-no-preload-014826" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.487655   49071 pod_ready.go:81] duration metric: took 386.189892ms waiting for pod "kube-scheduler-no-preload-014826" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.487668   49071 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:25.805375   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:28.304314   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:24.199489   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:26.694339   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:27.990781   50077 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:27.990808   50077 pod_ready.go:81] duration metric: took 373.794401ms waiting for pod "kube-controller-manager-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:27.990817   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jdvck" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.389532   50077 pod_ready.go:92] pod "kube-proxy-jdvck" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:28.389554   50077 pod_ready.go:81] duration metric: took 398.730628ms waiting for pod "kube-proxy-jdvck" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.389562   50077 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.791217   50077 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace has status "Ready":"True"
	I1024 20:13:28.791245   50077 pod_ready.go:81] duration metric: took 401.675656ms waiting for pod "kube-scheduler-old-k8s-version-467375" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:28.791259   50077 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace to be "Ready" ...
	I1024 20:13:31.101273   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:29.797752   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:32.294823   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:30.305423   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:32.804966   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:29.196181   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:31.694405   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:33.597846   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.098571   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:34.295326   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.295502   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:35.307544   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:37.804734   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:34.193583   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:36.194545   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.693640   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.598114   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.598778   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:38.295582   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.797360   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:40.303674   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:42.305932   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:41.193409   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.694630   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.097684   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.599550   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:43.295412   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.295801   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:47.795437   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:44.806885   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:47.305513   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:45.695737   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:48.194597   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:48.098390   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:50.098465   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.598464   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:49.796354   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.296299   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:49.806019   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.304671   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:50.692678   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:52.693810   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:55.099808   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:57.596982   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:54.795042   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:56.795788   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:54.305480   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:56.805003   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:55.192666   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:57.192992   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.598091   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:02.097277   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.296748   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.799381   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.304665   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.305140   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:13:59.193682   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:01.694286   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.098871   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.598019   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.297114   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.796174   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:03.804391   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:05.805262   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.304535   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:04.194236   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:06.692751   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.693756   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:08.598278   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:10.598744   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:09.296355   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:11.794188   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:10.805023   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.304639   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:11.193179   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.696086   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.097069   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.598606   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:13.795184   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.797064   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:15.804980   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.304229   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:16.193316   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.193452   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.099418   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.597767   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.598478   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:18.294610   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.295299   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.295580   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.304386   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.304955   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:20.693442   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:22.695298   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.598688   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.098094   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.796039   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.294583   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:24.804411   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:26.805975   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:25.193984   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:27.194309   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.098448   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.597809   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.295004   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.296770   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.302945   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.303224   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.305333   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:29.693713   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:31.693887   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.695638   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.599337   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:36.098527   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:33.795335   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:35.796128   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:37.798347   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:35.307171   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:37.806058   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:36.192382   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:38.195932   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:38.098563   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.098830   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.598203   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.295075   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.796827   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.304919   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.805069   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:40.693934   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:42.694102   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.598267   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.097792   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:45.297437   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.795616   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.805647   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:46.806849   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:44.695195   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:47.194156   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.597390   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:52.099367   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:50.294686   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:52.297230   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.306571   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:51.804484   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:49.194481   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:51.693650   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:53.694257   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:54.597760   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.597897   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:54.794752   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.795666   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:53.805053   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.303997   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:58.304326   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:56.193984   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:58.693506   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:59.098488   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:01.098937   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:14:59.297834   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:01.795492   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:00.305557   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:02.805113   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:00.694107   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.194559   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.597853   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:05.598764   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:03.798231   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:06.296567   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:04.805204   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:06.806277   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:05.693959   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.194793   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.098369   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:10.099343   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:12.597632   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:08.795941   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:11.295163   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:09.303880   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:11.308399   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:10.692947   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:12.694115   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.098788   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.598778   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:13.297546   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.799219   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:13.804941   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.805508   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.805620   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:15.194071   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:17.692344   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.099461   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:22.598528   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:18.294855   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.795197   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:20.303894   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:22.807109   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:19.693273   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:21.694158   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:23.694489   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:24.598739   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:26.610829   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:23.295231   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:25.296151   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:27.794796   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:25.304009   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:27.304056   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:26.194236   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:28.692475   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.097722   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.099314   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.795050   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.795981   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:29.304915   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:31.306232   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:30.693731   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.193919   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.100924   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:35.597972   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:37.598135   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:34.295967   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:36.297180   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:33.809488   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:36.305924   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:35.696190   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.193380   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.098563   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:42.597443   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.794953   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.794982   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:38.806251   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:41.304826   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:40.694041   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.192299   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:44.598402   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.097519   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.294813   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.297991   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.794454   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:43.803978   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.804440   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.805016   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:45.192754   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:47.693494   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.098171   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:51.598327   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.795988   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:52.296853   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:49.806503   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:51.807986   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:50.193124   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:52.692831   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.097085   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.600496   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.795189   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.795825   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.304728   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:56.305314   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:54.696873   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:57.193194   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.098128   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.099894   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.295180   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.295325   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:58.804230   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:00.804430   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.303762   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:15:59.193752   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:01.194280   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.694730   49198 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.597363   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.598434   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.599790   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:03.295998   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.298356   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.795402   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:05.305076   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:07.805412   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:04.884378   49198 pod_ready.go:81] duration metric: took 4m0.000380407s waiting for pod "metrics-server-57f55c9bc5-pv9ww" in "kube-system" namespace to be "Ready" ...
	E1024 20:16:04.884408   49198 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:16:04.884437   49198 pod_ready.go:38] duration metric: took 4m3.201253081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
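
The 49198 run above has just exhausted its 4-minute budget waiting for the metrics-server pod's Ready condition (pod_ready.go polls each pod's status until the deadline and logs "Ready":"False" on every miss). The following is a minimal client-go sketch of that polling pattern; the namespace, pod name, and 4-minute budget are taken from the log, the 2-second poll interval is an assumption, and this is an illustration rather than minikube's actual pod_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // same 4m budget as in the log
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-pv9ww", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for pod to be Ready") // the log's "context deadline exceeded" case
			return
		}
		time.Sleep(2 * time.Second) // assumed interval; the real poller uses its own backoff
	}
}

In the actual test the same loop runs concurrently for each cluster under test, which is why four processes (49071, 49198, 49708, 50077) interleave their "Ready":"False" lines above.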
	I1024 20:16:04.884459   49198 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:16:04.884488   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:04.884542   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:04.941853   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:04.941878   49198 cri.go:89] found id: ""
	I1024 20:16:04.941889   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:04.941963   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:04.947250   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:04.947317   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:04.990126   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:04.990151   49198 cri.go:89] found id: ""
	I1024 20:16:04.990163   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:04.990226   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:04.995026   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:04.995086   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:05.045422   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:05.045441   49198 cri.go:89] found id: ""
	I1024 20:16:05.045449   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:05.045505   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.049931   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:05.049997   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:05.115746   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:05.115767   49198 cri.go:89] found id: ""
	I1024 20:16:05.115775   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:05.115822   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.120476   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:05.120527   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:05.163487   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:05.163509   49198 cri.go:89] found id: ""
	I1024 20:16:05.163521   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:05.163580   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.167956   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:05.168027   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:05.209375   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:05.209403   49198 cri.go:89] found id: ""
	I1024 20:16:05.209412   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:05.209468   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.213932   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:05.213994   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:05.256033   49198 cri.go:89] found id: ""
	I1024 20:16:05.256055   49198 logs.go:284] 0 containers: []
	W1024 20:16:05.256070   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:05.256077   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:05.256130   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:05.313137   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:05.313163   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:05.313171   49198 cri.go:89] found id: ""
	I1024 20:16:05.313181   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:05.313236   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.319603   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:05.324116   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:05.324138   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:05.364879   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:05.364905   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:05.430314   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:05.430342   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:05.488524   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:05.488550   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:05.547000   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:05.547029   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:05.561360   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:05.561392   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:05.616215   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:05.616254   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:05.666923   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:05.666955   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:05.707305   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:05.707332   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:05.865943   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:05.865972   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:05.914044   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:05.914070   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:06.370658   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:06.370692   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:06.423891   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:06.423919   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
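
The block above is the log-gathering pass: for each control-plane component the runner lists container IDs with "sudo crictl ps -a --quiet --name=<component>" and then tails each hit with "crictl logs --tail 400 <id>". A rough Go sketch of that enumerate-then-tail pattern follows, assuming crictl is on PATH and passwordless sudo is available; the component list simply mirrors the names appearing in the log and the helper name is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs lists all container IDs for a named component via crictl,
// then tails the last 400 log lines of each container, as the log above does.
func gatherComponentLogs(name string) error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return fmt.Errorf("listing %s containers: %w", name, err)
	}
	for _, id := range strings.Fields(string(out)) {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("tailing %s (%s): %w", name, id, err)
		}
		fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
	}
	return nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, c := range components {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Println(err)
		}
	}
}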
	I1024 20:16:10.098187   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:12.597089   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:09.796035   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:11.796300   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:09.805755   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:11.806246   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:08.967015   49198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:16:08.982371   49198 api_server.go:72] duration metric: took 4m12.675281905s to wait for apiserver process to appear ...
	I1024 20:16:08.982397   49198 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:16:08.982431   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:08.982492   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:09.023557   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:09.023575   49198 cri.go:89] found id: ""
	I1024 20:16:09.023582   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:09.023626   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.029901   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:09.029954   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:09.066141   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:09.066169   49198 cri.go:89] found id: ""
	I1024 20:16:09.066181   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:09.066232   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.071099   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:09.071161   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:09.117898   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:09.117917   49198 cri.go:89] found id: ""
	I1024 20:16:09.117927   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:09.117979   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.122675   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:09.122729   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:09.162628   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:09.162647   49198 cri.go:89] found id: ""
	I1024 20:16:09.162656   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:09.162711   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.166799   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:09.166859   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:09.203866   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:09.203894   49198 cri.go:89] found id: ""
	I1024 20:16:09.203904   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:09.203968   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.208141   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:09.208201   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:09.252432   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:09.252449   49198 cri.go:89] found id: ""
	I1024 20:16:09.252457   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:09.252519   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.257709   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:09.257767   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:09.312883   49198 cri.go:89] found id: ""
	I1024 20:16:09.312908   49198 logs.go:284] 0 containers: []
	W1024 20:16:09.312919   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:09.312926   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:09.312984   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:09.365111   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:09.365138   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:09.365145   49198 cri.go:89] found id: ""
	I1024 20:16:09.365155   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:09.365215   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.370442   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:09.375055   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:09.375082   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:09.440328   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:09.440361   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:09.489007   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:09.489035   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:09.539429   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:09.539467   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:09.591012   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:09.591049   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:09.608336   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:09.608362   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:09.656190   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:09.656216   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:09.704915   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:09.704942   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:09.743847   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:09.743878   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:10.154301   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:10.154342   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:10.296525   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:10.296552   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:10.347731   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:10.347763   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:10.388130   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:10.388157   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:12.931381   49198 api_server.go:253] Checking apiserver healthz at https://192.168.72.10:8443/healthz ...
	I1024 20:16:12.938286   49198 api_server.go:279] https://192.168.72.10:8443/healthz returned 200:
	ok
	I1024 20:16:12.940208   49198 api_server.go:141] control plane version: v1.28.3
	I1024 20:16:12.940228   49198 api_server.go:131] duration metric: took 3.957823811s to wait for apiserver health ...
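
Here the apiserver health wait resolves: a GET to https://192.168.72.10:8443/healthz returns 200 with body "ok", after which the control-plane version is read. Below is a hedged sketch of such a healthz probe in Go; unlike the real check, which authenticates with the cluster CA and client certificates from the kubeconfig, this example skips TLS verification purely for brevity.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustrative only: verification is disabled here, whereas minikube's
	// checker trusts the cluster CA and presents client certs.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.10:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200 and "ok"
}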
	I1024 20:16:12.940236   49198 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:16:12.940255   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:12.940311   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:12.985630   49198 cri.go:89] found id: "7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:12.985654   49198 cri.go:89] found id: ""
	I1024 20:16:12.985664   49198 logs.go:284] 1 containers: [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251]
	I1024 20:16:12.985736   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:12.991021   49198 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:12.991094   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:13.031617   49198 cri.go:89] found id: "82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:13.031638   49198 cri.go:89] found id: ""
	I1024 20:16:13.031647   49198 logs.go:284] 1 containers: [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2]
	I1024 20:16:13.031690   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.036956   49198 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:13.037010   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:13.074663   49198 cri.go:89] found id: "9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:13.074683   49198 cri.go:89] found id: ""
	I1024 20:16:13.074692   49198 logs.go:284] 1 containers: [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0]
	I1024 20:16:13.074745   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.079061   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:13.079115   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:13.122923   49198 cri.go:89] found id: "d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:13.122947   49198 cri.go:89] found id: ""
	I1024 20:16:13.122957   49198 logs.go:284] 1 containers: [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31]
	I1024 20:16:13.123010   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.126914   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:13.126987   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:13.174746   49198 cri.go:89] found id: "a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:13.174781   49198 cri.go:89] found id: ""
	I1024 20:16:13.174791   49198 logs.go:284] 1 containers: [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3]
	I1024 20:16:13.174867   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.179817   49198 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:13.179884   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:13.228560   49198 cri.go:89] found id: "e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:13.228588   49198 cri.go:89] found id: ""
	I1024 20:16:13.228606   49198 logs.go:284] 1 containers: [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc]
	I1024 20:16:13.228661   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.233182   49198 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:13.233247   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:13.272072   49198 cri.go:89] found id: ""
	I1024 20:16:13.272100   49198 logs.go:284] 0 containers: []
	W1024 20:16:13.272110   49198 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:13.272117   49198 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:13.272174   49198 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:13.317104   49198 cri.go:89] found id: "26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:13.317129   49198 cri.go:89] found id: "2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:13.317137   49198 cri.go:89] found id: ""
	I1024 20:16:13.317148   49198 logs.go:284] 2 containers: [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382]
	I1024 20:16:13.317208   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.327265   49198 ssh_runner.go:195] Run: which crictl
	I1024 20:16:13.331706   49198 logs.go:123] Gathering logs for kube-scheduler [d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31] ...
	I1024 20:16:13.331730   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d23e68e4d4a23c24ac28f4ffcee48779f3868ac1d09c8f1c475e1a021c9a6c31"
	I1024 20:16:13.378259   49198 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:13.378299   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:13.402257   49198 logs.go:123] Gathering logs for kube-apiserver [7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251] ...
	I1024 20:16:13.402289   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251"
	I1024 20:16:13.465655   49198 logs.go:123] Gathering logs for kube-controller-manager [e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc] ...
	I1024 20:16:13.465685   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e159067fdfc42f1624489f6283b8855fa3457ce39f99b1e818a921f926fe61cc"
	I1024 20:16:13.521268   49198 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:13.521312   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:13.923501   49198 logs.go:123] Gathering logs for container status ...
	I1024 20:16:13.923550   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:13.976055   49198 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:13.976082   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:14.028953   49198 logs.go:123] Gathering logs for storage-provisioner [26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b] ...
	I1024 20:16:14.028985   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f391c93fe16d500fb10277199dfbc29949320f1e30d9ca4c5f107d6e916f7b"
	I1024 20:16:14.069859   49198 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:14.069887   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:14.196920   49198 logs.go:123] Gathering logs for etcd [82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2] ...
	I1024 20:16:14.196959   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b51425efb505cfd1cfe5f7879beac3786bd68fa23695df599f7b3f6bfd51e2"
	I1024 20:16:14.257588   49198 logs.go:123] Gathering logs for coredns [9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0] ...
	I1024 20:16:14.257617   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e2b63eae7db7c31b7c6c37dea01b7e18b141eb793f7b7ed916763418c4069a0"
	I1024 20:16:14.302980   49198 logs.go:123] Gathering logs for kube-proxy [a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3] ...
	I1024 20:16:14.303019   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9906107f32c176f457b56bcea98cd1c55430efbb47f93ad6ed3a02731b248d3"
	I1024 20:16:14.344441   49198 logs.go:123] Gathering logs for storage-provisioner [2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382] ...
	I1024 20:16:14.344469   49198 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b61033b8afd202fbccc5b52defd0bc2fc65a3004b337386841c76a774fb3382"
	I1024 20:16:16.893365   49198 system_pods.go:59] 8 kube-system pods found
	I1024 20:16:16.893395   49198 system_pods.go:61] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running
	I1024 20:16:16.893404   49198 system_pods.go:61] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running
	I1024 20:16:16.893412   49198 system_pods.go:61] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running
	I1024 20:16:16.893419   49198 system_pods.go:61] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running
	I1024 20:16:16.893426   49198 system_pods.go:61] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running
	I1024 20:16:16.893433   49198 system_pods.go:61] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running
	I1024 20:16:16.893444   49198 system_pods.go:61] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:16.893456   49198 system_pods.go:61] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running
	I1024 20:16:16.893469   49198 system_pods.go:74] duration metric: took 3.953227014s to wait for pod list to return data ...
	I1024 20:16:16.893483   49198 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:16:16.895879   49198 default_sa.go:45] found service account: "default"
	I1024 20:16:16.895896   49198 default_sa.go:55] duration metric: took 2.405313ms for default service account to be created ...
	I1024 20:16:16.895903   49198 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:16:16.902189   49198 system_pods.go:86] 8 kube-system pods found
	I1024 20:16:16.902217   49198 system_pods.go:89] "coredns-5dd5756b68-6qq4r" [e27b495c-efe0-45f4-b3b1-1c6d8ed5ed40] Running
	I1024 20:16:16.902225   49198 system_pods.go:89] "etcd-embed-certs-867165" [6d697f6b-0f21-4bfa-82d7-82c476c8de48] Running
	I1024 20:16:16.902232   49198 system_pods.go:89] "kube-apiserver-embed-certs-867165" [46aaf827-a940-40e2-9f06-5dbf6312c9d0] Running
	I1024 20:16:16.902240   49198 system_pods.go:89] "kube-controller-manager-embed-certs-867165" [3b1bfa63-a968-4fa2-a082-7f2eeb341a3e] Running
	I1024 20:16:16.902246   49198 system_pods.go:89] "kube-proxy-thkqr" [55c1a6e9-7a56-499f-a51c-41e4cbb1490d] Running
	I1024 20:16:16.902253   49198 system_pods.go:89] "kube-scheduler-embed-certs-867165" [7fdc8e18-4188-412b-b367-3e410abe1fa0] Running
	I1024 20:16:16.902269   49198 system_pods.go:89] "metrics-server-57f55c9bc5-pv9ww" [6a642ef8-3b64-4cf1-b905-a3c7f510f29f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:16.902281   49198 system_pods.go:89] "storage-provisioner" [e1351874-1865-4d9e-bb77-acd1eaf0023e] Running
	I1024 20:16:16.902292   49198 system_pods.go:126] duration metric: took 6.383517ms to wait for k8s-apps to be running ...
	I1024 20:16:16.902303   49198 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:16:16.902359   49198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:16:16.920015   49198 system_svc.go:56] duration metric: took 17.706073ms WaitForService to wait for kubelet.
	I1024 20:16:16.920039   49198 kubeadm.go:581] duration metric: took 4m20.612955305s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:16:16.920063   49198 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:16:16.924147   49198 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:16:16.924167   49198 node_conditions.go:123] node cpu capacity is 2
	I1024 20:16:16.924177   49198 node_conditions.go:105] duration metric: took 4.109839ms to run NodePressure ...
	I1024 20:16:16.924187   49198 start.go:228] waiting for startup goroutines ...
	I1024 20:16:16.924194   49198 start.go:233] waiting for cluster config update ...
	I1024 20:16:16.924206   49198 start.go:242] writing updated cluster config ...
	I1024 20:16:16.924490   49198 ssh_runner.go:195] Run: rm -f paused
	I1024 20:16:16.973588   49198 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:16:16.975639   49198 out.go:177] * Done! kubectl is now configured to use "embed-certs-867165" cluster and "default" namespace by default
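The log-gathering pass above shells into the guest and tails each control-plane container's logs with crictl, plus the crio and kubelet journals. As a minimal sketch, the collection commands look like the following when run over SSH inside the minikube VM; the container ID is copied from the log above purely as a placeholder and is not meaningful outside this run:

  # list the kube-apiserver container (all states), as logs.go does
  sudo crictl ps -a --quiet --name=kube-apiserver
  # tail the last 400 lines of one container's logs
  sudo /usr/bin/crictl logs --tail 400 7217044d2e0392ecba5903e275d19810b4ce825d431992b5cdc0799bbf56f251
  # CRI-O and kubelet unit logs from the systemd journal
  sudo journalctl -u crio -n 400
  sudo journalctl -u kubelet -n 400
  # node-level view via the bundled kubectl
  sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

The same sequence repeats for etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager and storage-provisioner; only the container ID changes.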
	I1024 20:16:14.597646   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.598202   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:14.296652   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.795527   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:14.304610   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:16.305225   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.598694   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:21.099076   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.795830   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:21.295897   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:18.804148   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:20.805158   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.304826   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.598167   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.598533   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:27.598810   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:23.794690   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.796011   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:27.798006   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:25.803034   49708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:26.497612   49708 pod_ready.go:81] duration metric: took 4m0.000149915s waiting for pod "metrics-server-57f55c9bc5-lmxdt" in "kube-system" namespace to be "Ready" ...
	E1024 20:16:26.497657   49708 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:16:26.497666   49708 pod_ready.go:38] duration metric: took 4m3.599625321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:16:26.497682   49708 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:16:26.497709   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:26.497757   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:26.569452   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:26.569479   49708 cri.go:89] found id: ""
	I1024 20:16:26.569489   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:26.569551   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.573824   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:26.573872   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:26.618910   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:26.618939   49708 cri.go:89] found id: ""
	I1024 20:16:26.618946   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:26.618998   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.623675   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:26.623723   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:26.671601   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:26.671621   49708 cri.go:89] found id: ""
	I1024 20:16:26.671628   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:26.671665   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.675997   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:26.676048   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:26.723100   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:26.723124   49708 cri.go:89] found id: ""
	I1024 20:16:26.723133   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:26.723187   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.727780   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:26.727837   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:26.765584   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:26.765608   49708 cri.go:89] found id: ""
	I1024 20:16:26.765618   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:26.765663   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.770062   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:26.770121   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:26.811710   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:26.811728   49708 cri.go:89] found id: ""
	I1024 20:16:26.811736   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:26.811786   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.816125   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:26.816187   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:26.860427   49708 cri.go:89] found id: ""
	I1024 20:16:26.860452   49708 logs.go:284] 0 containers: []
	W1024 20:16:26.860462   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:26.860469   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:26.860532   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:26.905052   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:26.905083   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:26.905091   49708 cri.go:89] found id: ""
	I1024 20:16:26.905100   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:26.905154   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.909590   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:26.913618   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:26.913636   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:26.958127   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:26.958157   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:27.012523   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:27.012555   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:27.059311   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:27.059345   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:27.102879   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:27.102905   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:27.154377   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:27.154409   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:27.197488   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:27.197516   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:27.210530   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:27.210559   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:27.379195   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:27.379225   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:27.826087   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:27.826119   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:27.880305   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:27.880348   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:27.932382   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:27.932417   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:27.979060   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:27.979088   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:29.598843   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:31.598885   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:30.295090   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:32.295447   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:30.532134   49708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:16:30.547497   49708 api_server.go:72] duration metric: took 4m14.551629626s to wait for apiserver process to appear ...
	I1024 20:16:30.547522   49708 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:16:30.547562   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:30.547627   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:30.588076   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:30.588097   49708 cri.go:89] found id: ""
	I1024 20:16:30.588104   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:30.588159   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.592397   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:30.592467   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:30.632362   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:30.632380   49708 cri.go:89] found id: ""
	I1024 20:16:30.632389   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:30.632446   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.636647   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:30.636695   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:30.676966   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:30.676997   49708 cri.go:89] found id: ""
	I1024 20:16:30.677005   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:30.677050   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.682153   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:30.682206   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:30.723427   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:30.723449   49708 cri.go:89] found id: ""
	I1024 20:16:30.723458   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:30.723516   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.727674   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:30.727740   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:30.774450   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:30.774473   49708 cri.go:89] found id: ""
	I1024 20:16:30.774482   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:30.774535   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.778753   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:30.778821   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:30.830068   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:30.830094   49708 cri.go:89] found id: ""
	I1024 20:16:30.830104   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:30.830169   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.835133   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:30.835201   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:30.885323   49708 cri.go:89] found id: ""
	I1024 20:16:30.885347   49708 logs.go:284] 0 containers: []
	W1024 20:16:30.885357   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:30.885363   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:30.885423   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:30.925415   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:30.925435   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:30.925440   49708 cri.go:89] found id: ""
	I1024 20:16:30.925447   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:30.925506   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.929723   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:30.933926   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:30.933965   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:30.999217   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:30.999250   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:31.051267   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:31.051300   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:31.107411   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:31.107444   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:31.233980   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:31.234009   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:31.275335   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:31.275362   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:31.329276   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:31.329316   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:31.380149   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:31.380184   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:31.393990   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:31.394016   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:31.440032   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:31.440065   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:31.478413   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:31.478445   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:31.529321   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:31.529349   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:31.578678   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:31.578708   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:33.603558   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:36.099473   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:34.295685   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:36.794759   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:34.514152   49708 api_server.go:253] Checking apiserver healthz at https://192.168.61.148:8444/healthz ...
	I1024 20:16:34.520578   49708 api_server.go:279] https://192.168.61.148:8444/healthz returned 200:
	ok
	I1024 20:16:34.522271   49708 api_server.go:141] control plane version: v1.28.3
	I1024 20:16:34.522289   49708 api_server.go:131] duration metric: took 3.974761353s to wait for apiserver health ...
	I1024 20:16:34.522297   49708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:16:34.522318   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:16:34.522363   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:16:34.568260   49708 cri.go:89] found id: "cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:34.568280   49708 cri.go:89] found id: ""
	I1024 20:16:34.568287   49708 logs.go:284] 1 containers: [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928]
	I1024 20:16:34.568336   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.575356   49708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:16:34.575414   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:16:34.623358   49708 cri.go:89] found id: "297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:34.623383   49708 cri.go:89] found id: ""
	I1024 20:16:34.623392   49708 logs.go:284] 1 containers: [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf]
	I1024 20:16:34.623449   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.628721   49708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:16:34.628777   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:16:34.675561   49708 cri.go:89] found id: "5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:34.675583   49708 cri.go:89] found id: ""
	I1024 20:16:34.675592   49708 logs.go:284] 1 containers: [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc]
	I1024 20:16:34.675654   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.681613   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:16:34.681677   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:16:34.722858   49708 cri.go:89] found id: "742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:34.722898   49708 cri.go:89] found id: ""
	I1024 20:16:34.722917   49708 logs.go:284] 1 containers: [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591]
	I1024 20:16:34.722974   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.727310   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:16:34.727376   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:16:34.768365   49708 cri.go:89] found id: "4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:34.768383   49708 cri.go:89] found id: ""
	I1024 20:16:34.768390   49708 logs.go:284] 1 containers: [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139]
	I1024 20:16:34.768436   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.772776   49708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:16:34.772837   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:16:34.825992   49708 cri.go:89] found id: "7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:34.826020   49708 cri.go:89] found id: ""
	I1024 20:16:34.826030   49708 logs.go:284] 1 containers: [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687]
	I1024 20:16:34.826083   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.830957   49708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:16:34.831011   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:16:34.878138   49708 cri.go:89] found id: ""
	I1024 20:16:34.878167   49708 logs.go:284] 0 containers: []
	W1024 20:16:34.878175   49708 logs.go:286] No container was found matching "kindnet"
	I1024 20:16:34.878180   49708 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:16:34.878235   49708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:16:34.929288   49708 cri.go:89] found id: "0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:34.929321   49708 cri.go:89] found id: "94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:34.929328   49708 cri.go:89] found id: ""
	I1024 20:16:34.929338   49708 logs.go:284] 2 containers: [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3]
	I1024 20:16:34.929391   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.933731   49708 ssh_runner.go:195] Run: which crictl
	I1024 20:16:34.938300   49708 logs.go:123] Gathering logs for storage-provisioner [0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471] ...
	I1024 20:16:34.938326   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0198578b96c6d8755b0f6c909fbc07343f692992b695979acaac0dd0340f3471"
	I1024 20:16:34.980919   49708 logs.go:123] Gathering logs for storage-provisioner [94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3] ...
	I1024 20:16:34.980944   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c1196dd672c18bf9b946c56a2c19b49e0a68d8f99b64173b9118003fb3bcb3"
	I1024 20:16:35.021465   49708 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:16:35.021495   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:16:35.165907   49708 logs.go:123] Gathering logs for coredns [5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc] ...
	I1024 20:16:35.165935   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520a46163d9a24ef41e647f2ff70aa7a92c40f2d43dbacfcce39f85e4d823bc"
	I1024 20:16:35.212733   49708 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:16:35.212759   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:16:35.620344   49708 logs.go:123] Gathering logs for kube-apiserver [cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928] ...
	I1024 20:16:35.620395   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc891cea4cf91b8c2b31bcc2d4668f49b692acd8860dfc6f46f832667b3c3928"
	I1024 20:16:35.669555   49708 logs.go:123] Gathering logs for etcd [297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf] ...
	I1024 20:16:35.669588   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297b00416e9d42ae6631672a7c4af000833851409e247313da1d19d02bd148bf"
	I1024 20:16:35.720959   49708 logs.go:123] Gathering logs for kube-proxy [4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139] ...
	I1024 20:16:35.720987   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c95bbf4f285bb1d4c2d1cfa555efc1bd4460b463506606b0ec1fca3e728c139"
	I1024 20:16:35.762823   49708 logs.go:123] Gathering logs for kube-scheduler [742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591] ...
	I1024 20:16:35.762852   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 742064a59716b4ec2d1d6105f74deb73f4fe753f6e8e562ab72594246bf31591"
	I1024 20:16:35.805994   49708 logs.go:123] Gathering logs for kube-controller-manager [7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687] ...
	I1024 20:16:35.806021   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e5201f16577b79adbb09590150e2e734ab458917b3e522b272c2b9e53caf687"
	I1024 20:16:35.879019   49708 logs.go:123] Gathering logs for container status ...
	I1024 20:16:35.879046   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:16:35.941760   49708 logs.go:123] Gathering logs for kubelet ...
	I1024 20:16:35.941796   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 20:16:35.995475   49708 logs.go:123] Gathering logs for dmesg ...
	I1024 20:16:35.995515   49708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:16:38.526080   49708 system_pods.go:59] 8 kube-system pods found
	I1024 20:16:38.526106   49708 system_pods.go:61] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running
	I1024 20:16:38.526114   49708 system_pods.go:61] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running
	I1024 20:16:38.526122   49708 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running
	I1024 20:16:38.526128   49708 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running
	I1024 20:16:38.526133   49708 system_pods.go:61] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running
	I1024 20:16:38.526139   49708 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running
	I1024 20:16:38.526150   49708 system_pods.go:61] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:38.526159   49708 system_pods.go:61] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running
	I1024 20:16:38.526168   49708 system_pods.go:74] duration metric: took 4.003864797s to wait for pod list to return data ...
	I1024 20:16:38.526182   49708 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:16:38.528827   49708 default_sa.go:45] found service account: "default"
	I1024 20:16:38.528854   49708 default_sa.go:55] duration metric: took 2.662588ms for default service account to be created ...
	I1024 20:16:38.528863   49708 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:16:38.534560   49708 system_pods.go:86] 8 kube-system pods found
	I1024 20:16:38.534579   49708 system_pods.go:89] "coredns-5dd5756b68-mklhw" [53629562-a50d-4ca5-80ab-baed4852b4d7] Running
	I1024 20:16:38.534585   49708 system_pods.go:89] "etcd-default-k8s-diff-port-643126" [1872e87b-f897-446d-9b5b-2f33aa762bb7] Running
	I1024 20:16:38.534589   49708 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643126" [031c08b2-73c6-4eea-ba0b-a2dda0bdebf3] Running
	I1024 20:16:38.534594   49708 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643126" [2d96b9f7-eb95-46a7-8e8f-bb9ea7b6bc8a] Running
	I1024 20:16:38.534598   49708 system_pods.go:89] "kube-proxy-x4zbh" [a47f6c48-c4de-4feb-a3ea-8874c980d263] Running
	I1024 20:16:38.534602   49708 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643126" [692f2ac4-9f23-4bce-924c-784464727cdd] Running
	I1024 20:16:38.534610   49708 system_pods.go:89] "metrics-server-57f55c9bc5-lmxdt" [9b235003-ac4a-491b-af2e-9af54e79922c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:16:38.534615   49708 system_pods.go:89] "storage-provisioner" [53920350-b0f4-4486-88a8-b97ed6c1cf17] Running
	I1024 20:16:38.534622   49708 system_pods.go:126] duration metric: took 5.753846ms to wait for k8s-apps to be running ...
	I1024 20:16:38.534630   49708 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:16:38.534668   49708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:16:38.549835   49708 system_svc.go:56] duration metric: took 15.197069ms WaitForService to wait for kubelet.
	I1024 20:16:38.549856   49708 kubeadm.go:581] duration metric: took 4m22.553994431s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:16:38.549878   49708 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:16:38.553043   49708 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:16:38.553065   49708 node_conditions.go:123] node cpu capacity is 2
	I1024 20:16:38.553076   49708 node_conditions.go:105] duration metric: took 3.193057ms to run NodePressure ...
	I1024 20:16:38.553086   49708 start.go:228] waiting for startup goroutines ...
	I1024 20:16:38.553091   49708 start.go:233] waiting for cluster config update ...
	I1024 20:16:38.553100   49708 start.go:242] writing updated cluster config ...
	I1024 20:16:38.553348   49708 ssh_runner.go:195] Run: rm -f paused
	I1024 20:16:38.601183   49708 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:16:38.603463   49708 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-643126" cluster and "default" namespace by default
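Before listing kube-system pods, the wait loop above probes the apiserver's /healthz endpoint until it answers 200. A rough command-line equivalent, assuming the endpoint is reachable from the host (it may require client certificates if anonymous auth is disabled; the address is taken from the log above):

  # probe the health endpoint checked at 20:16:34 above
  curl -k https://192.168.61.148:8444/healthz
  # a healthy control plane responds with:
  # ok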
	I1024 20:16:38.597848   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:40.599437   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:38.795772   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:41.293845   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:43.096749   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:45.097165   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:47.097443   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:43.298644   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:45.797003   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:49.097716   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:51.597754   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:48.295110   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:50.796361   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:53.600174   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:56.097860   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:53.295856   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:55.295890   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:57.795597   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:58.097890   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:00.598554   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:16:59.795830   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:02.295268   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:03.098362   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:05.596632   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:04.296575   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:06.296820   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:08.098450   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:10.597828   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:12.599199   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:08.795717   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:11.296662   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:15.097014   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:17.097844   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:13.794373   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:15.795134   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:17.795531   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:19.098039   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:21.098582   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:19.796588   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:22.296536   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:23.597792   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:26.098066   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:24.795501   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:26.796240   49071 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:27.488206   49071 pod_ready.go:81] duration metric: took 4m0.000518995s waiting for pod "metrics-server-57f55c9bc5-tsfvs" in "kube-system" namespace to be "Ready" ...
	E1024 20:17:27.488255   49071 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:17:27.488267   49071 pod_ready.go:38] duration metric: took 4m4.400905907s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:17:27.488288   49071 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:17:27.488320   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:27.488379   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:27.544995   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:27.545022   49071 cri.go:89] found id: ""
	I1024 20:17:27.545033   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:27.545116   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.550068   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:27.550127   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:27.595184   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:27.595207   49071 cri.go:89] found id: ""
	I1024 20:17:27.595215   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:27.595265   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.600016   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:27.600075   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:27.644222   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:27.644254   49071 cri.go:89] found id: ""
	I1024 20:17:27.644265   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:27.644321   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.654982   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:27.655048   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:27.697751   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:27.697773   49071 cri.go:89] found id: ""
	I1024 20:17:27.697783   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:27.697838   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.701909   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:27.701969   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:27.746060   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:27.746085   49071 cri.go:89] found id: ""
	I1024 20:17:27.746094   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:27.746147   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.750335   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:27.750392   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:27.791948   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:27.791973   49071 cri.go:89] found id: ""
	I1024 20:17:27.791981   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:27.792045   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.796535   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:27.796616   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:27.839648   49071 cri.go:89] found id: ""
	I1024 20:17:27.839675   49071 logs.go:284] 0 containers: []
	W1024 20:17:27.839683   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:27.839689   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:27.839750   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:27.889284   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:27.889327   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:27.889334   49071 cri.go:89] found id: ""
	I1024 20:17:27.889343   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:27.889404   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.893661   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:27.897791   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:27.897819   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:27.941335   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:27.941369   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:27.954378   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:27.954409   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:28.115760   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:28.115792   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:28.171378   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:28.171409   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:28.211591   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:28.211620   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:28.247491   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247676   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247811   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:28.247961   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:28.268681   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:28.268717   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:28.099979   50077 pod_ready.go:102] pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace has status "Ready":"False"
	I1024 20:17:28.791972   50077 pod_ready.go:81] duration metric: took 4m0.000695315s waiting for pod "metrics-server-74d5856cc6-ml25z" in "kube-system" namespace to be "Ready" ...
	E1024 20:17:28.792005   50077 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 20:17:28.792032   50077 pod_ready.go:38] duration metric: took 4m1.199949971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:17:28.792069   50077 kubeadm.go:640] restartCluster took 5m7.653001653s
	W1024 20:17:28.792133   50077 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1024 20:17:28.792173   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1024 20:17:28.321382   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:28.321413   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:28.364236   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:28.364260   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:28.840985   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:28.841028   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:28.896806   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:28.896846   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:28.948487   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:28.948520   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:28.993469   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:28.993500   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:29.052064   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:29.052102   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:29.052154   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:29.052165   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052174   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052180   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:29.052186   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:29.052191   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:29.052196   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:33.598790   50077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.806587354s)
	I1024 20:17:33.598873   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:17:33.614594   50077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:17:33.625146   50077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:17:33.635420   50077 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:17:33.635486   50077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1024 20:17:33.858680   50077 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 20:17:39.053169   49071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:17:39.069883   49071 api_server.go:72] duration metric: took 4m23.373979574s to wait for apiserver process to appear ...
	I1024 20:17:39.069910   49071 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:17:39.069953   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:39.070015   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:39.116676   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:39.116696   49071 cri.go:89] found id: ""
	I1024 20:17:39.116703   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:39.116752   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.121745   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:39.121814   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:39.174897   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:39.174932   49071 cri.go:89] found id: ""
	I1024 20:17:39.174943   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:39.175002   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.180933   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:39.181003   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:39.239666   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:39.239691   49071 cri.go:89] found id: ""
	I1024 20:17:39.239701   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:39.239754   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.244270   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:39.244328   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:39.285405   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:39.285432   49071 cri.go:89] found id: ""
	I1024 20:17:39.285443   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:39.285503   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.290326   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:39.290393   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:39.330723   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:39.330751   49071 cri.go:89] found id: ""
	I1024 20:17:39.330761   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:39.330816   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.335850   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:39.335917   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:39.375354   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:39.375377   49071 cri.go:89] found id: ""
	I1024 20:17:39.375387   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:39.375449   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.380243   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:39.380313   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:39.424841   49071 cri.go:89] found id: ""
	I1024 20:17:39.424875   49071 logs.go:284] 0 containers: []
	W1024 20:17:39.424885   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:39.424892   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:39.424950   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:39.464134   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:39.464153   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:39.464160   49071 cri.go:89] found id: ""
	I1024 20:17:39.464168   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:39.464224   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.468810   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:39.473093   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:39.473128   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:39.507113   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507292   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507432   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:39.507588   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:39.530433   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:39.530479   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:39.666739   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:39.666765   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:39.710505   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:39.710538   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:39.749917   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:39.749946   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:39.799168   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:39.799196   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:39.846346   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:39.846377   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:40.273032   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:40.273065   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:40.320491   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:40.320521   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:40.378356   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:40.378395   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:40.421618   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:40.421647   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:40.466303   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:40.466334   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:40.478941   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:40.478966   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:40.544618   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:40.544642   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:40.544694   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:40.544706   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544718   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544725   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:40.544733   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:40.544739   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:40.544747   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:46.481686   50077 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1024 20:17:46.481762   50077 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 20:17:46.481861   50077 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 20:17:46.482000   50077 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 20:17:46.482104   50077 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 20:17:46.482236   50077 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 20:17:46.482362   50077 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 20:17:46.482486   50077 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1024 20:17:46.482538   50077 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 20:17:46.484150   50077 out.go:204]   - Generating certificates and keys ...
	I1024 20:17:46.484246   50077 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 20:17:46.484315   50077 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 20:17:46.484402   50077 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1024 20:17:46.484509   50077 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1024 20:17:46.484603   50077 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1024 20:17:46.484689   50077 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1024 20:17:46.484778   50077 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1024 20:17:46.484870   50077 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1024 20:17:46.484972   50077 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1024 20:17:46.485069   50077 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1024 20:17:46.485123   50077 kubeadm.go:322] [certs] Using the existing "sa" key
	I1024 20:17:46.485200   50077 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 20:17:46.485263   50077 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 20:17:46.485343   50077 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 20:17:46.485430   50077 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 20:17:46.485503   50077 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 20:17:46.485590   50077 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 20:17:46.487065   50077 out.go:204]   - Booting up control plane ...
	I1024 20:17:46.487158   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 20:17:46.487219   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 20:17:46.487291   50077 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 20:17:46.487401   50077 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 20:17:46.487551   50077 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 20:17:46.487623   50077 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.003664 seconds
	I1024 20:17:46.487756   50077 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 20:17:46.487882   50077 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 20:17:46.487940   50077 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 20:17:46.488123   50077 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-467375 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1024 20:17:46.488199   50077 kubeadm.go:322] [bootstrap-token] Using token: axp9sy.xsem3c8nzt72b18p
	I1024 20:17:46.490507   50077 out.go:204]   - Configuring RBAC rules ...
	I1024 20:17:46.490603   50077 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 20:17:46.490719   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 20:17:46.490832   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 20:17:46.490938   50077 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 20:17:46.491009   50077 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 20:17:46.491044   50077 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 20:17:46.491083   50077 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 20:17:46.491091   50077 kubeadm.go:322] 
	I1024 20:17:46.491151   50077 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 20:17:46.491163   50077 kubeadm.go:322] 
	I1024 20:17:46.491224   50077 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 20:17:46.491231   50077 kubeadm.go:322] 
	I1024 20:17:46.491260   50077 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 20:17:46.491346   50077 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 20:17:46.491409   50077 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 20:17:46.491419   50077 kubeadm.go:322] 
	I1024 20:17:46.491511   50077 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 20:17:46.491621   50077 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 20:17:46.491715   50077 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 20:17:46.491725   50077 kubeadm.go:322] 
	I1024 20:17:46.491829   50077 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1024 20:17:46.491929   50077 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 20:17:46.491937   50077 kubeadm.go:322] 
	I1024 20:17:46.492064   50077 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token axp9sy.xsem3c8nzt72b18p \
	I1024 20:17:46.492249   50077 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f \
	I1024 20:17:46.492292   50077 kubeadm.go:322]     --control-plane 	  
	I1024 20:17:46.492302   50077 kubeadm.go:322] 
	I1024 20:17:46.492423   50077 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 20:17:46.492435   50077 kubeadm.go:322] 
	I1024 20:17:46.492532   50077 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token axp9sy.xsem3c8nzt72b18p \
	I1024 20:17:46.492675   50077 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d65e719e9d73f088b074f9e25a77381ad036ffab74ebfe01134d84f60119dd3f 
	I1024 20:17:46.492686   50077 cni.go:84] Creating CNI manager for ""
	I1024 20:17:46.492694   50077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 20:17:46.494152   50077 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 20:17:46.495677   50077 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 20:17:46.510639   50077 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1024 20:17:46.539872   50077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:17:46.539933   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:46.539945   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=old-k8s-version-467375 minikube.k8s.io/updated_at=2023_10_24T20_17_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:46.984338   50077 ops.go:34] apiserver oom_adj: -16
	I1024 20:17:46.984391   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:47.163022   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:47.798557   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:48.298499   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:48.798506   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:49.298076   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:49.798120   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.298504   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.798493   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:51.298777   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:51.798477   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:52.298309   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:52.798243   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:50.546645   49071 api_server.go:253] Checking apiserver healthz at https://192.168.50.162:8443/healthz ...
	I1024 20:17:50.552245   49071 api_server.go:279] https://192.168.50.162:8443/healthz returned 200:
	ok
	I1024 20:17:50.553721   49071 api_server.go:141] control plane version: v1.28.3
	I1024 20:17:50.553747   49071 api_server.go:131] duration metric: took 11.483829454s to wait for apiserver health ...
	I1024 20:17:50.553757   49071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:17:50.553784   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 20:17:50.553844   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 20:17:50.594504   49071 cri.go:89] found id: "c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:50.594528   49071 cri.go:89] found id: ""
	I1024 20:17:50.594536   49071 logs.go:284] 1 containers: [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32]
	I1024 20:17:50.594586   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.598912   49071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 20:17:50.598963   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 20:17:50.644339   49071 cri.go:89] found id: "cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:50.644355   49071 cri.go:89] found id: ""
	I1024 20:17:50.644362   49071 logs.go:284] 1 containers: [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b]
	I1024 20:17:50.644406   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.649046   49071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 20:17:50.649099   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 20:17:50.688245   49071 cri.go:89] found id: "94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:50.688268   49071 cri.go:89] found id: ""
	I1024 20:17:50.688278   49071 logs.go:284] 1 containers: [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8]
	I1024 20:17:50.688330   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.692382   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 20:17:50.692429   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 20:17:50.736359   49071 cri.go:89] found id: "458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:50.736384   49071 cri.go:89] found id: ""
	I1024 20:17:50.736393   49071 logs.go:284] 1 containers: [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202]
	I1024 20:17:50.736451   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.741226   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 20:17:50.741287   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 20:17:50.797894   49071 cri.go:89] found id: "bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:50.797920   49071 cri.go:89] found id: ""
	I1024 20:17:50.797930   49071 logs.go:284] 1 containers: [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c]
	I1024 20:17:50.797997   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.802725   49071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 20:17:50.802781   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 20:17:50.851081   49071 cri.go:89] found id: "153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:50.851106   49071 cri.go:89] found id: ""
	I1024 20:17:50.851115   49071 logs.go:284] 1 containers: [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33]
	I1024 20:17:50.851166   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.855549   49071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 20:17:50.855600   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 20:17:50.909237   49071 cri.go:89] found id: ""
	I1024 20:17:50.909265   49071 logs.go:284] 0 containers: []
	W1024 20:17:50.909276   49071 logs.go:286] No container was found matching "kindnet"
	I1024 20:17:50.909283   49071 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 20:17:50.909355   49071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 20:17:50.958541   49071 cri.go:89] found id: "6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:50.958567   49071 cri.go:89] found id: "7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:50.958574   49071 cri.go:89] found id: ""
	I1024 20:17:50.958583   49071 logs.go:284] 2 containers: [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1]
	I1024 20:17:50.958638   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.962947   49071 ssh_runner.go:195] Run: which crictl
	I1024 20:17:50.967261   49071 logs.go:123] Gathering logs for describe nodes ...
	I1024 20:17:50.967283   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 20:17:51.087158   49071 logs.go:123] Gathering logs for kube-apiserver [c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32] ...
	I1024 20:17:51.087190   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c440cb516cdfb83dc56ce8978af643a2865464a2d4cf244d948af928bc402b32"
	I1024 20:17:51.144421   49071 logs.go:123] Gathering logs for etcd [cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b] ...
	I1024 20:17:51.144458   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb13ad95dea1afcc47f6b264c1eaa515f46c65d490a52cd4db8632f67fb6cd2b"
	I1024 20:17:51.200040   49071 logs.go:123] Gathering logs for kube-controller-manager [153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33] ...
	I1024 20:17:51.200072   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153d53cd79d89ac8a871e77f5abe6b0a7acdd1e7d1ec5799ea4253ec83106d33"
	I1024 20:17:51.255703   49071 logs.go:123] Gathering logs for CRI-O ...
	I1024 20:17:51.255740   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 20:17:51.683831   49071 logs.go:123] Gathering logs for coredns [94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8] ...
	I1024 20:17:51.683869   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94df20bf68998612e139c4db7df68edac57f0dd4d581aefeb7293457447565f8"
	I1024 20:17:51.726821   49071 logs.go:123] Gathering logs for kube-scheduler [458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202] ...
	I1024 20:17:51.726856   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 458ce37f1738a6aa6002b269244b2051226d167dda5d8ed285ea5214d40ad202"
	I1024 20:17:51.776977   49071 logs.go:123] Gathering logs for storage-provisioner [7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1] ...
	I1024 20:17:51.777006   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e817e194cdec0829154c02061530b72e75f1ccbdafdb939a8b6fd1d43c966c1"
	I1024 20:17:51.822826   49071 logs.go:123] Gathering logs for kubelet ...
	I1024 20:17:51.822861   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 20:17:51.873557   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.873838   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.874063   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:51.874313   49071 logs.go:138] Found kubelet problem: Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:51.900648   49071 logs.go:123] Gathering logs for dmesg ...
	I1024 20:17:51.900690   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 20:17:51.916123   49071 logs.go:123] Gathering logs for storage-provisioner [6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2] ...
	I1024 20:17:51.916161   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d89cb6110d0ab2f28b78ae13bf32c22ecf1fb9c833664f8f014502ae7c457e2"
	I1024 20:17:51.960440   49071 logs.go:123] Gathering logs for container status ...
	I1024 20:17:51.960470   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 20:17:52.010020   49071 logs.go:123] Gathering logs for kube-proxy [bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c] ...
	I1024 20:17:52.010051   49071 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc751572f7c36a872aa3da69e8c174c1f002c92d9c5e181e080098d667e06d1c"
	I1024 20:17:52.051039   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:52.051063   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 20:17:52.051113   49071 out.go:239] X Problems detected in kubelet:
	W1024 20:17:52.051127   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586538    1274 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051142   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.586589    1274 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051162   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: W1024 20:13:12.586999    1274 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	W1024 20:17:52.051173   49071 out.go:239]   Oct 24 20:13:12 no-preload-014826 kubelet[1274]: E1024 20:13:12.587021    1274 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-014826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-014826' and this object
	I1024 20:17:52.051183   49071 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:52.051190   49071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:53.298168   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:53.798546   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:54.298175   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:54.798534   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:55.298510   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:55.798562   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:56.297914   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:56.797930   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:57.298527   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:57.798493   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:58.298630   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:58.798550   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:59.298526   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:17:59.798537   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:00.298538   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:00.798072   50077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:18:01.014502   50077 kubeadm.go:1081] duration metric: took 14.474620601s to wait for elevateKubeSystemPrivileges.
	I1024 20:18:01.014547   50077 kubeadm.go:406] StartCluster complete in 5m39.9402605s
	I1024 20:18:01.014569   50077 settings.go:142] acquiring lock: {Name:mkfe64531ffbfddba0384789ad75de8c3abf0175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:18:01.014667   50077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 20:18:01.017210   50077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9023/kubeconfig: {Name:mk6a5bafc46e0d6a4cbf1c7fdc6dec3b59b464a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:18:01.017539   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:18:01.017574   50077 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:18:01.017659   50077 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017666   50077 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017677   50077 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-467375"
	W1024 20:18:01.017690   50077 addons.go:240] addon storage-provisioner should already be in state true
	I1024 20:18:01.017695   50077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-467375"
	I1024 20:18:01.017699   50077 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-467375"
	I1024 20:18:01.017718   50077 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-467375"
	W1024 20:18:01.017727   50077 addons.go:240] addon metrics-server should already be in state true
	I1024 20:18:01.017731   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.017777   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.017816   50077 config.go:182] Loaded profile config "old-k8s-version-467375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 20:18:01.018053   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018088   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.018111   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018122   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.018149   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.018257   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.036179   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37631
	I1024 20:18:01.036834   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.037477   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.037504   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.037665   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43905
	I1024 20:18:01.037824   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34475
	I1024 20:18:01.037912   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.038074   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.038220   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.038306   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.038850   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.038867   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.039010   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.039021   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.039391   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.039410   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.039925   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.039949   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.039974   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.040014   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.041243   50077 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-467375"
	W1024 20:18:01.041258   50077 addons.go:240] addon default-storageclass should already be in state true
	I1024 20:18:01.041277   50077 host.go:66] Checking if "old-k8s-version-467375" exists ...
	I1024 20:18:01.041611   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.041645   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.056254   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
	I1024 20:18:01.056888   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.057215   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I1024 20:18:01.057487   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.057502   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.057895   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.057956   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.058536   50077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17485-9023/.minikube/bin/docker-machine-driver-kvm2
	I1024 20:18:01.058574   50077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 20:18:01.058844   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.058857   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.058929   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I1024 20:18:01.059172   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.059288   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.059451   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.059964   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.059975   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.060353   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.060565   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.061107   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.062802   50077 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 20:18:01.064189   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 20:18:01.064209   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 20:18:01.064230   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.062154   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.066082   50077 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:18:01.067046   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.067880   50077 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:18:01.067901   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:18:01.067921   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.068400   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.068432   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.069073   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.069343   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.069484   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.069587   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.071678   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.072196   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.072220   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.072596   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.072776   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.072905   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.073043   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
	I1024 20:18:01.079576   50077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I1024 20:18:01.080025   50077 main.go:141] libmachine: () Calling .GetVersion
	I1024 20:18:01.080592   50077 main.go:141] libmachine: Using API Version  1
	I1024 20:18:01.080613   50077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 20:18:01.081035   50077 main.go:141] libmachine: () Calling .GetMachineName
	I1024 20:18:01.081240   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetState
	I1024 20:18:01.083090   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .DriverName
	I1024 20:18:01.083404   50077 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:18:01.083425   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:18:01.083443   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHHostname
	I1024 20:18:01.086433   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.086802   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:42:97", ip: ""} in network mk-old-k8s-version-467375: {Iface:virbr1 ExpiryTime:2023-10-24 21:01:35 +0000 UTC Type:0 Mac:52:54:00:28:42:97 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:old-k8s-version-467375 Clientid:01:52:54:00:28:42:97}
	I1024 20:18:01.086824   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | domain old-k8s-version-467375 has defined IP address 192.168.39.71 and MAC address 52:54:00:28:42:97 in network mk-old-k8s-version-467375
	I1024 20:18:01.087003   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHPort
	I1024 20:18:01.087198   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHKeyPath
	I1024 20:18:01.087348   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .GetSSHUsername
	I1024 20:18:01.087506   50077 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa Username:docker}
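
Editor's note: the sshutil.go lines above record the parameters minikube uses to reach the guest VM (IP 192.168.39.71, port 22, the per-machine id_rsa key, user "docker"). The following is only a rough Go sketch of what such a key-based connection looks like, not minikube's actual sshutil implementation; the host, key path, and command are placeholders copied from the log and the example needs the golang.org/x/crypto module.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Values mirror what sshutil.go logs above; adjust for your own machine.
        keyPath := "/home/jenkins/minikube-integration/17485-9023/.minikube/machines/old-k8s-version-467375/id_rsa"
        addr := "192.168.39.71:22"

        key, err := os.ReadFile(keyPath)
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Throwaway test VM; real code should verify the host key instead.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Example command similar to the ones the runner issues below.
        out, err := session.Output("sudo systemctl is-active kubelet")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("kubelet: %s", out)
    }
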
	I1024 20:18:01.197205   50077 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-467375" context rescaled to 1 replicas
	I1024 20:18:01.197249   50077 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:18:01.200328   50077 out.go:177] * Verifying Kubernetes components...
	I1024 20:18:02.061971   49071 system_pods.go:59] 8 kube-system pods found
	I1024 20:18:02.062015   49071 system_pods.go:61] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running
	I1024 20:18:02.062024   49071 system_pods.go:61] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running
	I1024 20:18:02.062031   49071 system_pods.go:61] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running
	I1024 20:18:02.062040   49071 system_pods.go:61] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running
	I1024 20:18:02.062047   49071 system_pods.go:61] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running
	I1024 20:18:02.062054   49071 system_pods.go:61] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running
	I1024 20:18:02.062066   49071 system_pods.go:61] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:02.062078   49071 system_pods.go:61] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running
	I1024 20:18:02.062086   49071 system_pods.go:74] duration metric: took 11.508322005s to wait for pod list to return data ...
	I1024 20:18:02.062098   49071 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:18:02.065560   49071 default_sa.go:45] found service account: "default"
	I1024 20:18:02.065585   49071 default_sa.go:55] duration metric: took 3.476366ms for default service account to be created ...
	I1024 20:18:02.065595   49071 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:18:02.073224   49071 system_pods.go:86] 8 kube-system pods found
	I1024 20:18:02.073253   49071 system_pods.go:89] "coredns-5dd5756b68-gnn8j" [f8f83c43-bf4a-452f-96c3-e968aa6cfd8b] Running
	I1024 20:18:02.073262   49071 system_pods.go:89] "etcd-no-preload-014826" [02a39d20-e22a-4f65-bd8c-2249ac5fea33] Running
	I1024 20:18:02.073269   49071 system_pods.go:89] "kube-apiserver-no-preload-014826" [66daea82-8f3b-45b6-bf76-1f32b7e38fd2] Running
	I1024 20:18:02.073277   49071 system_pods.go:89] "kube-controller-manager-no-preload-014826" [3c79db09-384f-44eb-8cc8-348e41b3505b] Running
	I1024 20:18:02.073284   49071 system_pods.go:89] "kube-proxy-hvphg" [9a9c3c47-456b-4aa9-bf59-882cc3d2f3f7] Running
	I1024 20:18:02.073290   49071 system_pods.go:89] "kube-scheduler-no-preload-014826" [2896a544-894a-4bc1-966e-8762507687ba] Running
	I1024 20:18:02.073313   49071 system_pods.go:89] "metrics-server-57f55c9bc5-tsfvs" [f601af0f-443c-445c-8198-259cf9015272] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:02.073326   49071 system_pods.go:89] "storage-provisioner" [323512c1-2555-419c-b128-47b945f9d24d] Running
	I1024 20:18:02.073335   49071 system_pods.go:126] duration metric: took 7.733883ms to wait for k8s-apps to be running ...
	I1024 20:18:02.073346   49071 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:18:02.073405   49071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:18:02.093085   49071 system_svc.go:56] duration metric: took 19.727658ms WaitForService to wait for kubelet.
	I1024 20:18:02.093113   49071 kubeadm.go:581] duration metric: took 4m46.397215509s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:18:02.093135   49071 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:18:02.101982   49071 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:18:02.102007   49071 node_conditions.go:123] node cpu capacity is 2
	I1024 20:18:02.102018   49071 node_conditions.go:105] duration metric: took 8.878046ms to run NodePressure ...
	I1024 20:18:02.102035   49071 start.go:228] waiting for startup goroutines ...
	I1024 20:18:02.102041   49071 start.go:233] waiting for cluster config update ...
	I1024 20:18:02.102054   49071 start.go:242] writing updated cluster config ...
	I1024 20:18:02.102767   49071 ssh_runner.go:195] Run: rm -f paused
	I1024 20:18:02.159693   49071 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:18:02.161831   49071 out.go:177] * Done! kubectl is now configured to use "no-preload-014826" cluster and "default" namespace by default
	I1024 20:18:01.201778   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:18:01.315241   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:18:01.335753   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:18:01.339160   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 20:18:01.339182   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 20:18:01.376704   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 20:18:01.376726   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 20:18:01.385150   50077 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-467375" to be "Ready" ...
	I1024 20:18:01.385223   50077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 20:18:01.443957   50077 node_ready.go:49] node "old-k8s-version-467375" has status "Ready":"True"
	I1024 20:18:01.443978   50077 node_ready.go:38] duration metric: took 58.799937ms waiting for node "old-k8s-version-467375" to be "Ready" ...
	I1024 20:18:01.443987   50077 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:18:01.453968   50077 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:18:01.453998   50077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 20:18:01.481599   50077 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:01.543065   50077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 20:18:02.715998   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.400725332s)
	I1024 20:18:02.716049   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716062   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716066   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.38027937s)
	I1024 20:18:02.716103   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716120   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716152   50077 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.330913087s)
	I1024 20:18:02.716170   50077 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
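
Editor's note: the bash pipeline above fetches the coredns ConfigMap and uses sed to insert a hosts{} stanza in front of the "forward . /etc/resolv.conf" line (and a log directive before errors), so that host.minikube.internal resolves to the host-only gateway 192.168.39.1. A hedged Go sketch of the same string edit on a Corefile is below; it is an illustration of the transformation, not minikube's code, and it omits the log directive.

    package main

    import (
        "fmt"
        "strings"
    )

    // insertHostRecord mimics the sed step logged above: it places a hosts{}
    // block immediately before the "forward . /etc/resolv.conf" line.
    func insertHostRecord(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf(
            "        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
            hostIP)
        var out strings.Builder
        for _, line := range strings.Split(corefile, "\n") {
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock)
            }
            out.WriteString(line + "\n")
        }
        return out.String()
    }

    func main() {
        // Minimal example Corefile; the real one comes from the coredns ConfigMap.
        corefile := `.:53 {
            errors
            forward . /etc/resolv.conf
            cache 30
    }`
        fmt.Print(insertHostRecord(corefile, "192.168.39.1"))
    }
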
	I1024 20:18:02.716377   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716392   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.716402   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716410   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716512   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.716522   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716536   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.716547   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.716557   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.716623   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.716637   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.717532   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.717534   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.717554   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.790444   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.790480   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.790901   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.790925   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895176   50077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.352065096s)
	I1024 20:18:02.895243   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.895268   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.895611   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.895630   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895634   50077 main.go:141] libmachine: (old-k8s-version-467375) DBG | Closing plugin on server side
	I1024 20:18:02.895639   50077 main.go:141] libmachine: Making call to close driver server
	I1024 20:18:02.895654   50077 main.go:141] libmachine: (old-k8s-version-467375) Calling .Close
	I1024 20:18:02.895875   50077 main.go:141] libmachine: Successfully made call to close driver server
	I1024 20:18:02.895888   50077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 20:18:02.895905   50077 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-467375"
	I1024 20:18:02.897664   50077 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1024 20:18:02.899508   50077 addons.go:502] enable addons completed in 1.881940564s: enabled=[storage-provisioner default-storageclass metrics-server]
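
Editor's note: the addon flow above copies each manifest into /etc/kubernetes/addons/ on the guest and applies it with the bundled v1.16.0 kubectl. Below is a hedged Go sketch of that apply step using os/exec; it is a local approximation only (the real runner executes the command over SSH inside the VM, under sudo and with KUBECONFIG set in the environment rather than via a flag).

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Paths are taken from the log above; both the kubectl binary and the
        // kubeconfig live inside the guest VM, so adjust when running elsewhere.
        kubectl := "/var/lib/minikube/binaries/v1.16.0/kubectl"
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }

        // Build "kubectl --kubeconfig=... apply -f a -f b ..." in one invocation,
        // mirroring the multi-manifest apply seen in the log.
        args := []string{"--kubeconfig=/var/lib/minikube/kubeconfig", "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command(kubectl, args...).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl apply failed: %v\n%s", err, out)
        }
        log.Printf("applied metrics-server manifests:\n%s", out)
    }
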
	I1024 20:18:03.719917   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:06.207388   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:08.207967   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:10.708258   50077 pod_ready.go:102] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"False"
	I1024 20:18:12.208133   50077 pod_ready.go:92] pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace has status "Ready":"True"
	I1024 20:18:12.208155   50077 pod_ready.go:81] duration metric: took 10.726531733s waiting for pod "coredns-5644d7b6d9-nbmqt" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.208166   50077 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9bpht" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.213213   50077 pod_ready.go:92] pod "kube-proxy-9bpht" in "kube-system" namespace has status "Ready":"True"
	I1024 20:18:12.213237   50077 pod_ready.go:81] duration metric: took 5.063943ms waiting for pod "kube-proxy-9bpht" in "kube-system" namespace to be "Ready" ...
	I1024 20:18:12.213247   50077 pod_ready.go:38] duration metric: took 10.769249135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:18:12.213267   50077 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:18:12.213344   50077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:18:12.228272   50077 api_server.go:72] duration metric: took 11.030986098s to wait for apiserver process to appear ...
	I1024 20:18:12.228295   50077 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:18:12.228313   50077 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1024 20:18:12.234663   50077 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1024 20:18:12.235584   50077 api_server.go:141] control plane version: v1.16.0
	I1024 20:18:12.235599   50077 api_server.go:131] duration metric: took 7.297294ms to wait for apiserver health ...
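
Editor's note: the health check above issues an HTTPS GET against https://192.168.39.71:8443/healthz and treats a 200 "ok" response as healthy. A minimal Go sketch of such a probe follows; it skips certificate verification for brevity, whereas minikube's own check trusts the cluster CA, so treat it as an illustration rather than the actual api_server.go logic.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver presents a cluster-CA-signed certificate; a
                // proper check would load that CA instead of skipping verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.39.71:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
    }
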
	I1024 20:18:12.235605   50077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:18:12.239203   50077 system_pods.go:59] 4 kube-system pods found
	I1024 20:18:12.239228   50077 system_pods.go:61] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.239235   50077 system_pods.go:61] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.239246   50077 system_pods.go:61] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.239292   50077 system_pods.go:61] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.239307   50077 system_pods.go:74] duration metric: took 3.696523ms to wait for pod list to return data ...
	I1024 20:18:12.239315   50077 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:18:12.242065   50077 default_sa.go:45] found service account: "default"
	I1024 20:18:12.242080   50077 default_sa.go:55] duration metric: took 2.760528ms for default service account to be created ...
	I1024 20:18:12.242086   50077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:18:12.245602   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.245624   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.245631   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.245640   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.245648   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.245664   50077 retry.go:31] will retry after 287.935783ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:12.538837   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.538900   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.538924   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.538942   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.538955   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.538979   50077 retry.go:31] will retry after 320.680304ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:12.864800   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:12.864826   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:12.864832   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:12.864838   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:12.864844   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:12.864858   50077 retry.go:31] will retry after 364.04425ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:13.233903   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:13.233927   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:13.233934   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:13.233941   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:13.233946   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:13.233974   50077 retry.go:31] will retry after 559.821457ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:13.799208   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:13.799234   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:13.799240   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:13.799246   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:13.799252   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:13.799266   50077 retry.go:31] will retry after 522.263157ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:14.325735   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:14.325767   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:14.325776   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:14.325789   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:14.325799   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:14.325817   50077 retry.go:31] will retry after 668.137602ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:14.999589   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:14.999614   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:14.999620   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:14.999626   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:14.999632   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:14.999646   50077 retry.go:31] will retry after 859.983274ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:15.865531   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:15.865556   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:15.865561   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:15.865568   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:15.865573   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:15.865589   50077 retry.go:31] will retry after 1.238765858s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:17.109999   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:17.110023   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:17.110028   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:17.110035   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:17.110041   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:17.110054   50077 retry.go:31] will retry after 1.485428629s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:18.600612   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:18.600635   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:18.600641   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:18.600647   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:18.600652   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:18.600665   50077 retry.go:31] will retry after 2.290652681s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:20.897529   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:20.897556   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:20.897562   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:20.897571   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:20.897577   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:20.897593   50077 retry.go:31] will retry after 2.367552906s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:23.270766   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:23.270792   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:23.270800   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:23.270810   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:23.270817   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:23.270834   50077 retry.go:31] will retry after 2.861357376s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:26.136663   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:26.136696   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:26.136704   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:26.136715   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:26.136725   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:26.136743   50077 retry.go:31] will retry after 3.526737387s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:29.670148   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:29.670175   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:29.670181   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:29.670188   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:29.670195   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:29.670215   50077 retry.go:31] will retry after 5.450931485s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:35.125964   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:35.125989   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:35.125994   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:35.126001   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:35.126007   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:35.126022   50077 retry.go:31] will retry after 5.914408322s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:41.046649   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:41.046670   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:41.046677   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:41.046684   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:41.046690   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:41.046704   50077 retry.go:31] will retry after 6.748980526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:47.802189   50077 system_pods.go:86] 4 kube-system pods found
	I1024 20:18:47.802212   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:47.802217   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:47.802225   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:47.802230   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:47.802244   50077 retry.go:31] will retry after 8.662562452s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 20:18:56.471025   50077 system_pods.go:86] 7 kube-system pods found
	I1024 20:18:56.471062   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:18:56.471071   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:18:56.471079   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:18:56.471086   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:18:56.471094   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Pending
	I1024 20:18:56.471108   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:18:56.471121   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:18:56.471142   50077 retry.go:31] will retry after 10.356793998s: missing components: etcd, kube-scheduler
	I1024 20:19:06.834711   50077 system_pods.go:86] 8 kube-system pods found
	I1024 20:19:06.834741   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:19:06.834749   50077 system_pods.go:89] "etcd-old-k8s-version-467375" [8e194c9a-b258-4488-9fda-24b681d09d8d] Pending
	I1024 20:19:06.834755   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:19:06.834762   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:19:06.834767   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:19:06.834772   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Running
	I1024 20:19:06.834782   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:19:06.834792   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:19:06.834809   50077 retry.go:31] will retry after 14.609583217s: missing components: etcd
	I1024 20:19:21.450651   50077 system_pods.go:86] 8 kube-system pods found
	I1024 20:19:21.450678   50077 system_pods.go:89] "coredns-5644d7b6d9-nbmqt" [60dab487-6a1c-4223-9a74-be06f2331625] Running
	I1024 20:19:21.450685   50077 system_pods.go:89] "etcd-old-k8s-version-467375" [8e194c9a-b258-4488-9fda-24b681d09d8d] Running
	I1024 20:19:21.450689   50077 system_pods.go:89] "kube-apiserver-old-k8s-version-467375" [ce17991d-bbfd-4cb1-ae79-f356140008f9] Running
	I1024 20:19:21.450693   50077 system_pods.go:89] "kube-controller-manager-old-k8s-version-467375" [2d1c6b20-4c6e-477c-bcd1-8a6180977587] Running
	I1024 20:19:21.450699   50077 system_pods.go:89] "kube-proxy-9bpht" [ed713982-614e-41c9-a305-5e1841aab7d2] Running
	I1024 20:19:21.450709   50077 system_pods.go:89] "kube-scheduler-old-k8s-version-467375" [0bc8f0ae-ad99-432f-b149-b3d2a4661fd1] Running
	I1024 20:19:21.450719   50077 system_pods.go:89] "metrics-server-74d5856cc6-b5qcv" [7499edec-6098-4ce6-b70b-6c3336fa692f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:19:21.450732   50077 system_pods.go:89] "storage-provisioner" [9941fc4f-34d2-41d8-887e-93bfd845b574] Running
	I1024 20:19:21.450745   50077 system_pods.go:126] duration metric: took 1m9.20865321s to wait for k8s-apps to be running ...
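
Editor's note: the long run of retry.go lines above is a poll loop that re-lists kube-system pods and backs off with progressively longer delays (roughly 300 ms up to ~14 s) until no required control-plane component is missing. The Go sketch below captures that retry-with-growing-backoff pattern under stated assumptions; the check function and the growth factor are stand-ins, not minikube's retry package.

    package main

    import (
        "fmt"
        "time"
    )

    // waitForComponents retries check() with a growing delay, loosely mirroring
    // the retry.go cadence visible in the log above. check is a placeholder
    // that returns the names of components still missing.
    func waitForComponents(check func() []string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for {
            missing := check()
            if len(missing) == 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out; still missing: %v", missing)
            }
            fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow the interval between attempts
            if delay > 15*time.Second {
                delay = 15 * time.Second
            }
        }
    }

    func main() {
        start := time.Now()
        err := waitForComponents(func() []string {
            // Stand-in check: pretend etcd only shows up after ~2 seconds.
            if time.Since(start) < 2*time.Second {
                return []string{"etcd"}
            }
            return nil
        }, time.Minute)
        fmt.Println("done:", err)
    }
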
	I1024 20:19:21.450757   50077 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:19:21.450800   50077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:19:21.468030   50077 system_svc.go:56] duration metric: took 17.254248ms WaitForService to wait for kubelet.
	I1024 20:19:21.468061   50077 kubeadm.go:581] duration metric: took 1m20.270780436s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:19:21.468089   50077 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:19:21.471958   50077 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:19:21.471982   50077 node_conditions.go:123] node cpu capacity is 2
	I1024 20:19:21.471993   50077 node_conditions.go:105] duration metric: took 3.898893ms to run NodePressure ...
	I1024 20:19:21.472003   50077 start.go:228] waiting for startup goroutines ...
	I1024 20:19:21.472008   50077 start.go:233] waiting for cluster config update ...
	I1024 20:19:21.472018   50077 start.go:242] writing updated cluster config ...
	I1024 20:19:21.472257   50077 ssh_runner.go:195] Run: rm -f paused
	I1024 20:19:21.520082   50077 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1024 20:19:21.522545   50077 out.go:177] 
	W1024 20:19:21.524125   50077 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1024 20:19:21.525515   50077 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1024 20:19:21.527113   50077 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-467375" cluster and "default" namespace by default
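
Editor's note: the warning a few lines up flags kubectl 1.28.3 against a 1.16.0 cluster as "minor skew: 12", i.e. the difference between the minor version components. The Go sketch below shows how such a skew number can be computed from two "major.minor.patch" strings; it is hand-rolled parsing for illustration, not minikube's actual version helper.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components of
    // two semantic version strings, e.g. "1.28.3" vs "1.16.0" -> 12.
    func minorSkew(client, server string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("unexpected version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        c, err := minor(client)
        if err != nil {
            return 0, err
        }
        s, err := minor(server)
        if err != nil {
            return 0, err
        }
        if c > s {
            return c - s, nil
        }
        return s - c, nil
    }

    func main() {
        skew, err := minorSkew("1.28.3", "1.16.0")
        fmt.Println(skew, err) // 12 <nil>
    }
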
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-10-24 20:11:58 UTC, ends at Tue 2023-10-24 20:30:57 UTC. --
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.102594502Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179457102573794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=5283ab58-0f2a-49c6-95bf-4df02ea27f85 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.103192414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5f7ae0fe-75e1-4e57-ac35-41f8d35098ff name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.103242614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5f7ae0fe-75e1-4e57-ac35-41f8d35098ff name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.103461456Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15cfea4cc862a2fa28d852aa206aa8cff0b5f94827f6ef972bf1caea394e169f,PodSandboxId:0dd3c0060f763986335f788e173b33ba65e31ffcc49f3ce4f1ac5c757bf5823e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178683553085708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9941fc4f-34d2-41d8-887e-93bfd845b574,},Annotations:map[string]string{io.kubernetes.container.hash: b65eb62b,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d460706afb1a63d29b784f6dccefc3f8436a7e3e30f77c0504564c591528a87,PodSandboxId:8fb53b434c655cca38f640f57b12a0d1f28721a87b7051816841502bacebac2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698178683424491699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9bpht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed713982-614e-41c9-a305-5e1841aab7d2,},Annotations:map[string]string{io.kubernetes.container.hash: 52301ca8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3befe9d41186c52d2fd0cbe24e6e412502a31fd64323e303e11cdc850b29167,PodSandboxId:6f437dfde8ea005b06f7a2f5b6f9c086168133bc1afa6c0b7100a230288127b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698178682274046208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbmqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60dab487-6a1c-4223-9a74-be06f2331625,},Annotations:map[string]string{io.kubernetes.container.hash: c29ec159,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850ec0f2b7ba0a12abc50c0249882b2894837d785fc4cd6bdcfb2d6a023b6e5a,PodSandboxId:306e6f6f7cd3444e3a4b27d5e4fed3a3fe44666719322cb0a75ad324c4002630,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698178657505242356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e07935c110f777397416fb6e544a55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 15009e04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d38854e1b720b631aff201fbe7600cacd87a505d1e2a94ec09a3fec249c582,PodSandboxId:b90660ec9923f84f67ceead419faa4c84997f02e352abd2815c48e3e55b600c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698178656531858553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf056c13d767f95188318d77e512053638a457924525f0625d09740e6ead087,PodSandboxId:a6549fc51cf2bd282dde8d52054ddff84c8235f4551ba3341385f9deabfe8532,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698178656038381617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6116ab191670e7b565264bfc41b1631776726bd1036b20cf34cc6700b709d7e8,PodSandboxId:68b686c2126cb7b154d7f588600684325818516747670ce660bc3e6b56305f48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698178655844958759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc15f9a1e3d6b08274d552bb9acdea0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ec6507f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5f7ae0fe-75e1-4e57-ac35-41f8d35098ff name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.149679904Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6f828187-e804-4f85-907c-b03ae194f7fc name=/runtime.v1.RuntimeService/Version
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.149739705Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6f828187-e804-4f85-907c-b03ae194f7fc name=/runtime.v1.RuntimeService/Version
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.151575019Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e046a147-c131-4246-a26a-215ca8c5f567 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.151968391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179457151957393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=e046a147-c131-4246-a26a-215ca8c5f567 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.152632000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c7c480aa-a539-46eb-ad27-5bda69914c60 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.152678766Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c7c480aa-a539-46eb-ad27-5bda69914c60 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.152860284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15cfea4cc862a2fa28d852aa206aa8cff0b5f94827f6ef972bf1caea394e169f,PodSandboxId:0dd3c0060f763986335f788e173b33ba65e31ffcc49f3ce4f1ac5c757bf5823e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178683553085708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9941fc4f-34d2-41d8-887e-93bfd845b574,},Annotations:map[string]string{io.kubernetes.container.hash: b65eb62b,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d460706afb1a63d29b784f6dccefc3f8436a7e3e30f77c0504564c591528a87,PodSandboxId:8fb53b434c655cca38f640f57b12a0d1f28721a87b7051816841502bacebac2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698178683424491699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9bpht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed713982-614e-41c9-a305-5e1841aab7d2,},Annotations:map[string]string{io.kubernetes.container.hash: 52301ca8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3befe9d41186c52d2fd0cbe24e6e412502a31fd64323e303e11cdc850b29167,PodSandboxId:6f437dfde8ea005b06f7a2f5b6f9c086168133bc1afa6c0b7100a230288127b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698178682274046208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbmqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60dab487-6a1c-4223-9a74-be06f2331625,},Annotations:map[string]string{io.kubernetes.container.hash: c29ec159,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850ec0f2b7ba0a12abc50c0249882b2894837d785fc4cd6bdcfb2d6a023b6e5a,PodSandboxId:306e6f6f7cd3444e3a4b27d5e4fed3a3fe44666719322cb0a75ad324c4002630,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698178657505242356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e07935c110f777397416fb6e544a55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 15009e04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d38854e1b720b631aff201fbe7600cacd87a505d1e2a94ec09a3fec249c582,PodSandboxId:b90660ec9923f84f67ceead419faa4c84997f02e352abd2815c48e3e55b600c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698178656531858553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf056c13d767f95188318d77e512053638a457924525f0625d09740e6ead087,PodSandboxId:a6549fc51cf2bd282dde8d52054ddff84c8235f4551ba3341385f9deabfe8532,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698178656038381617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6116ab191670e7b565264bfc41b1631776726bd1036b20cf34cc6700b709d7e8,PodSandboxId:68b686c2126cb7b154d7f588600684325818516747670ce660bc3e6b56305f48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698178655844958759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc15f9a1e3d6b08274d552bb9acdea0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ec6507f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c7c480aa-a539-46eb-ad27-5bda69914c60 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.193267750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=26ddf317-bdcd-4341-b3d3-43b9eea62858 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.193322905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=26ddf317-bdcd-4341-b3d3-43b9eea62858 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.194752391Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1ed76077-b7bc-4847-a983-bcf653ee5cba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.195172854Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179457195154315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=1ed76077-b7bc-4847-a983-bcf653ee5cba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.195756035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4be10634-2b45-4bc2-bb04-cc2735d71462 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.195835603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4be10634-2b45-4bc2-bb04-cc2735d71462 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.196008025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15cfea4cc862a2fa28d852aa206aa8cff0b5f94827f6ef972bf1caea394e169f,PodSandboxId:0dd3c0060f763986335f788e173b33ba65e31ffcc49f3ce4f1ac5c757bf5823e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178683553085708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9941fc4f-34d2-41d8-887e-93bfd845b574,},Annotations:map[string]string{io.kubernetes.container.hash: b65eb62b,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d460706afb1a63d29b784f6dccefc3f8436a7e3e30f77c0504564c591528a87,PodSandboxId:8fb53b434c655cca38f640f57b12a0d1f28721a87b7051816841502bacebac2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698178683424491699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9bpht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed713982-614e-41c9-a305-5e1841aab7d2,},Annotations:map[string]string{io.kubernetes.container.hash: 52301ca8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3befe9d41186c52d2fd0cbe24e6e412502a31fd64323e303e11cdc850b29167,PodSandboxId:6f437dfde8ea005b06f7a2f5b6f9c086168133bc1afa6c0b7100a230288127b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698178682274046208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbmqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60dab487-6a1c-4223-9a74-be06f2331625,},Annotations:map[string]string{io.kubernetes.container.hash: c29ec159,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850ec0f2b7ba0a12abc50c0249882b2894837d785fc4cd6bdcfb2d6a023b6e5a,PodSandboxId:306e6f6f7cd3444e3a4b27d5e4fed3a3fe44666719322cb0a75ad324c4002630,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698178657505242356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e07935c110f777397416fb6e544a55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 15009e04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d38854e1b720b631aff201fbe7600cacd87a505d1e2a94ec09a3fec249c582,PodSandboxId:b90660ec9923f84f67ceead419faa4c84997f02e352abd2815c48e3e55b600c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698178656531858553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf056c13d767f95188318d77e512053638a457924525f0625d09740e6ead087,PodSandboxId:a6549fc51cf2bd282dde8d52054ddff84c8235f4551ba3341385f9deabfe8532,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698178656038381617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6116ab191670e7b565264bfc41b1631776726bd1036b20cf34cc6700b709d7e8,PodSandboxId:68b686c2126cb7b154d7f588600684325818516747670ce660bc3e6b56305f48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698178655844958759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc15f9a1e3d6b08274d552bb9acdea0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ec6507f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4be10634-2b45-4bc2-bb04-cc2735d71462 name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.231229021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8d92a07d-dbb6-494d-a655-8f4da6e5cfa8 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.231316222Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8d92a07d-dbb6-494d-a655-8f4da6e5cfa8 name=/runtime.v1.RuntimeService/Version
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.232471087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e81681b9-5ca1-4dba-a90e-471a3faa69ef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.232975191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1698179457232957177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=e81681b9-5ca1-4dba-a90e-471a3faa69ef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.233707963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3e3368bc-87e7-4b09-8b76-49f893ab51fd name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.233779759Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3e3368bc-87e7-4b09-8b76-49f893ab51fd name=/runtime.v1.RuntimeService/ListContainers
	Oct 24 20:30:57 old-k8s-version-467375 crio[713]: time="2023-10-24 20:30:57.233957068Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15cfea4cc862a2fa28d852aa206aa8cff0b5f94827f6ef972bf1caea394e169f,PodSandboxId:0dd3c0060f763986335f788e173b33ba65e31ffcc49f3ce4f1ac5c757bf5823e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1698178683553085708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9941fc4f-34d2-41d8-887e-93bfd845b574,},Annotations:map[string]string{io.kubernetes.container.hash: b65eb62b,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d460706afb1a63d29b784f6dccefc3f8436a7e3e30f77c0504564c591528a87,PodSandboxId:8fb53b434c655cca38f640f57b12a0d1f28721a87b7051816841502bacebac2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1698178683424491699,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9bpht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed713982-614e-41c9-a305-5e1841aab7d2,},Annotations:map[string]string{io.kubernetes.container.hash: 52301ca8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3befe9d41186c52d2fd0cbe24e6e412502a31fd64323e303e11cdc850b29167,PodSandboxId:6f437dfde8ea005b06f7a2f5b6f9c086168133bc1afa6c0b7100a230288127b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1698178682274046208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbmqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60dab487-6a1c-4223-9a74-be06f2331625,},Annotations:map[string]string{io.kubernetes.container.hash: c29ec159,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850ec0f2b7ba0a12abc50c0249882b2894837d785fc4cd6bdcfb2d6a023b6e5a,PodSandboxId:306e6f6f7cd3444e3a4b27d5e4fed3a3fe44666719322cb0a75ad324c4002630,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1698178657505242356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e07935c110f777397416fb6e544a55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 15009e04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d38854e1b720b631aff201fbe7600cacd87a505d1e2a94ec09a3fec249c582,PodSandboxId:b90660ec9923f84f67ceead419faa4c84997f02e352abd2815c48e3e55b600c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1698178656531858553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf056c13d767f95188318d77e512053638a457924525f0625d09740e6ead087,PodSandboxId:a6549fc51cf2bd282dde8d52054ddff84c8235f4551ba3341385f9deabfe8532,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1698178656038381617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6116ab191670e7b565264bfc41b1631776726bd1036b20cf34cc6700b709d7e8,PodSandboxId:68b686c2126cb7b154d7f588600684325818516747670ce660bc3e6b56305f48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1698178655844958759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-467375,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bc15f9a1e3d6b08274d552bb9acdea0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: ec6507f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3e3368bc-87e7-4b09-8b76-49f893ab51fd name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	15cfea4cc862a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   0dd3c0060f763       storage-provisioner
	2d460706afb1a       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   12 minutes ago      Running             kube-proxy                0                   8fb53b434c655       kube-proxy-9bpht
	f3befe9d41186       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   12 minutes ago      Running             coredns                   0                   6f437dfde8ea0       coredns-5644d7b6d9-nbmqt
	850ec0f2b7ba0       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   13 minutes ago      Running             etcd                      0                   306e6f6f7cd34       etcd-old-k8s-version-467375
	53d38854e1b72       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   13 minutes ago      Running             kube-scheduler            0                   b90660ec9923f       kube-scheduler-old-k8s-version-467375
	3bf056c13d767       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   13 minutes ago      Running             kube-controller-manager   0                   a6549fc51cf2b       kube-controller-manager-old-k8s-version-467375
	6116ab191670e       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   13 minutes ago      Running             kube-apiserver            0                   68b686c2126cb       kube-apiserver-old-k8s-version-467375
	
	* 
	* ==> coredns [f3befe9d41186c52d2fd0cbe24e6e412502a31fd64323e303e11cdc850b29167] <==
	* .:53
	2023-10-24T20:18:02.676Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-24T20:18:02.676Z [INFO] CoreDNS-1.6.2
	2023-10-24T20:18:02.676Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-10-24T20:18:29.106Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-467375
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-467375
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=old-k8s-version-467375
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T20_17_46_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 20:17:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 20:30:41 +0000   Tue, 24 Oct 2023 20:17:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 20:30:41 +0000   Tue, 24 Oct 2023 20:17:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 20:30:41 +0000   Tue, 24 Oct 2023 20:17:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 20:30:41 +0000   Tue, 24 Oct 2023 20:17:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    old-k8s-version-467375
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 cf177f680a9a4008b36f2fe5fe7a9338
	 System UUID:                cf177f68-0a9a-4008-b36f-2fe5fe7a9338
	 Boot ID:                    1c9add44-c102-4a36-9938-ce862bd11598
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-nbmqt                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                etcd-old-k8s-version-467375                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-apiserver-old-k8s-version-467375             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-467375    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-9bpht                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-467375             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-b5qcv                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  Starting                 13m                kubelet, old-k8s-version-467375     Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet, old-k8s-version-467375     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-467375     Node old-k8s-version-467375 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, old-k8s-version-467375     Node old-k8s-version-467375 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-467375     Node old-k8s-version-467375 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-467375  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct24 20:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072882] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.641488] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.471187] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141222] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.508089] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct24 20:12] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.151952] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.165011] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.145338] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.261366] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +20.463140] systemd-fstab-generator[1029]: Ignoring "noauto" for root device
	[  +0.479303] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.134895] kauditd_printk_skb: 13 callbacks suppressed
	[Oct24 20:13] kauditd_printk_skb: 4 callbacks suppressed
	[Oct24 20:17] systemd-fstab-generator[3168]: Ignoring "noauto" for root device
	[  +0.736063] kauditd_printk_skb: 8 callbacks suppressed
	[Oct24 20:18] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [850ec0f2b7ba0a12abc50c0249882b2894837d785fc4cd6bdcfb2d6a023b6e5a] <==
	* 2023-10-24 20:17:37.646887 I | raft: newRaft 226d7ac4e2309206 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-10-24 20:17:37.646912 I | raft: 226d7ac4e2309206 became follower at term 1
	2023-10-24 20:17:37.654879 W | auth: simple token is not cryptographically signed
	2023-10-24 20:17:37.661258 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-24 20:17:37.663183 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-24 20:17:37.663426 I | embed: listening for metrics on http://192.168.39.71:2381
	2023-10-24 20:17:37.664137 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-24 20:17:37.665162 I | etcdserver/membership: added member 226d7ac4e2309206 [https://192.168.39.71:2380] to cluster 98fbf1e9ed6d9a6e
	2023-10-24 20:17:37.665321 I | etcdserver: 226d7ac4e2309206 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-24 20:17:38.047436 I | raft: 226d7ac4e2309206 is starting a new election at term 1
	2023-10-24 20:17:38.047628 I | raft: 226d7ac4e2309206 became candidate at term 2
	2023-10-24 20:17:38.047731 I | raft: 226d7ac4e2309206 received MsgVoteResp from 226d7ac4e2309206 at term 2
	2023-10-24 20:17:38.047762 I | raft: 226d7ac4e2309206 became leader at term 2
	2023-10-24 20:17:38.047779 I | raft: raft.node: 226d7ac4e2309206 elected leader 226d7ac4e2309206 at term 2
	2023-10-24 20:17:38.048110 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-24 20:17:38.049466 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-24 20:17:38.049590 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-24 20:17:38.049618 I | etcdserver: published {Name:old-k8s-version-467375 ClientURLs:[https://192.168.39.71:2379]} to cluster 98fbf1e9ed6d9a6e
	2023-10-24 20:17:38.049850 I | embed: ready to serve client requests
	2023-10-24 20:17:38.049935 I | embed: ready to serve client requests
	2023-10-24 20:17:38.051342 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-24 20:17:38.053123 I | embed: serving client requests on 192.168.39.71:2379
	2023-10-24 20:18:02.422690 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (100.444844ms) to execute
	2023-10-24 20:27:38.072143 I | mvcc: store.index: compact 660
	2023-10-24 20:27:38.074442 I | mvcc: finished scheduled compaction at 660 (took 1.821789ms)
	
	* 
	* ==> kernel <==
	*  20:30:57 up 19 min,  0 users,  load average: 0.71, 0.28, 0.22
	Linux old-k8s-version-467375 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6116ab191670e7b565264bfc41b1631776726bd1036b20cf34cc6700b709d7e8] <==
	* I1024 20:23:42.465817       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1024 20:23:42.466064       1 handler_proxy.go:99] no RequestInfo found in the context
	E1024 20:23:42.466124       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:23:42.466150       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:25:42.466780       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1024 20:25:42.466938       1 handler_proxy.go:99] no RequestInfo found in the context
	E1024 20:25:42.467017       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:25:42.467025       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:27:42.468014       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1024 20:27:42.468149       1 handler_proxy.go:99] no RequestInfo found in the context
	E1024 20:27:42.468252       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:27:42.468262       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:28:42.468714       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1024 20:28:42.468815       1 handler_proxy.go:99] no RequestInfo found in the context
	E1024 20:28:42.468873       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:28:42.468896       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 20:30:42.469640       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1024 20:30:42.469987       1 handler_proxy.go:99] no RequestInfo found in the context
	E1024 20:30:42.470094       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 20:30:42.470122       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3bf056c13d767f95188318d77e512053638a457924525f0625d09740e6ead087] <==
	* E1024 20:24:34.671297       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:24:57.422881       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:25:04.923320       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:25:29.424898       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:25:35.175390       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:26:01.427390       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:26:05.428339       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:26:33.429927       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:26:35.680484       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:27:05.432699       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:27:05.938681       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1024 20:27:36.190838       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:27:37.434845       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:28:06.443052       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:28:09.437027       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:28:36.695382       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:28:41.439473       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:29:06.947856       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:29:13.441966       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:29:37.199924       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:29:45.443872       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:30:07.452658       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:30:17.445725       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:30:37.705307       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:30:49.447787       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [2d460706afb1a63d29b784f6dccefc3f8436a7e3e30f77c0504564c591528a87] <==
	* W1024 20:18:03.692313       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1024 20:18:03.714817       1 node.go:135] Successfully retrieved node IP: 192.168.39.71
	I1024 20:18:03.714904       1 server_others.go:149] Using iptables Proxier.
	I1024 20:18:03.716451       1 server.go:529] Version: v1.16.0
	I1024 20:18:03.720208       1 config.go:313] Starting service config controller
	I1024 20:18:03.720270       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1024 20:18:03.720303       1 config.go:131] Starting endpoints config controller
	I1024 20:18:03.720409       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1024 20:18:03.820721       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1024 20:18:03.821019       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [53d38854e1b720b631aff201fbe7600cacd87a505d1e2a94ec09a3fec249c582] <==
	* W1024 20:17:41.458577       1 authentication.go:79] Authentication is disabled
	I1024 20:17:41.458656       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1024 20:17:41.459207       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1024 20:17:41.503456       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 20:17:41.503698       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 20:17:41.503797       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 20:17:41.503881       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 20:17:41.503957       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 20:17:41.516470       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 20:17:41.544943       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 20:17:41.545130       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 20:17:41.558105       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 20:17:41.558407       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 20:17:41.558730       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 20:17:42.506433       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 20:17:42.511970       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 20:17:42.538247       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 20:17:42.553234       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 20:17:42.553343       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 20:17:42.553925       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 20:17:42.554155       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 20:17:42.559312       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 20:17:42.559596       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 20:17:42.560260       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 20:17:42.562913       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 20:11:58 UTC, ends at Tue 2023-10-24 20:30:57 UTC. --
	Oct 24 20:26:26 old-k8s-version-467375 kubelet[3174]: E1024 20:26:26.806249    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:26:40 old-k8s-version-467375 kubelet[3174]: E1024 20:26:40.806357    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:26:55 old-k8s-version-467375 kubelet[3174]: E1024 20:26:55.806159    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:27:09 old-k8s-version-467375 kubelet[3174]: E1024 20:27:09.806706    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:27:23 old-k8s-version-467375 kubelet[3174]: E1024 20:27:23.806193    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:27:34 old-k8s-version-467375 kubelet[3174]: E1024 20:27:34.806634    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:27:34 old-k8s-version-467375 kubelet[3174]: E1024 20:27:34.907272    3174 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Oct 24 20:27:48 old-k8s-version-467375 kubelet[3174]: E1024 20:27:48.806401    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:28:01 old-k8s-version-467375 kubelet[3174]: E1024 20:28:01.806469    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:28:13 old-k8s-version-467375 kubelet[3174]: E1024 20:28:13.806155    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:28:24 old-k8s-version-467375 kubelet[3174]: E1024 20:28:24.806801    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:28:38 old-k8s-version-467375 kubelet[3174]: E1024 20:28:38.806451    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:28:53 old-k8s-version-467375 kubelet[3174]: E1024 20:28:53.824129    3174 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 24 20:28:53 old-k8s-version-467375 kubelet[3174]: E1024 20:28:53.824266    3174 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 24 20:28:53 old-k8s-version-467375 kubelet[3174]: E1024 20:28:53.824330    3174 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 24 20:28:53 old-k8s-version-467375 kubelet[3174]: E1024 20:28:53.824362    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Oct 24 20:29:06 old-k8s-version-467375 kubelet[3174]: E1024 20:29:06.810938    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:29:17 old-k8s-version-467375 kubelet[3174]: E1024 20:29:17.806275    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:29:28 old-k8s-version-467375 kubelet[3174]: E1024 20:29:28.806218    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:29:43 old-k8s-version-467375 kubelet[3174]: E1024 20:29:43.806651    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:29:57 old-k8s-version-467375 kubelet[3174]: E1024 20:29:57.806555    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:30:12 old-k8s-version-467375 kubelet[3174]: E1024 20:30:12.806396    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:30:26 old-k8s-version-467375 kubelet[3174]: E1024 20:30:26.806363    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:30:39 old-k8s-version-467375 kubelet[3174]: E1024 20:30:39.806445    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:30:53 old-k8s-version-467375 kubelet[3174]: E1024 20:30:53.806948    3174 pod_workers.go:191] Error syncing pod 7499edec-6098-4ce6-b70b-6c3336fa692f ("metrics-server-74d5856cc6-b5qcv_kube-system(7499edec-6098-4ce6-b70b-6c3336fa692f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [15cfea4cc862a2fa28d852aa206aa8cff0b5f94827f6ef972bf1caea394e169f] <==
	* I1024 20:18:03.712749       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 20:18:03.734431       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 20:18:03.734586       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 20:18:03.742147       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 20:18:03.744014       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24fd7826-13bb-4292-aeda-a867c165a3ad", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-467375_0c370f91-25ec-4144-b971-8091d45e365c became leader
	I1024 20:18:03.744082       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-467375_0c370f91-25ec-4144-b971-8091d45e365c!
	I1024 20:18:03.844586       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-467375_0c370f91-25ec-4144-b971-8091d45e365c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467375 -n old-k8s-version-467375
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-467375 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-b5qcv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-467375 describe pod metrics-server-74d5856cc6-b5qcv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-467375 describe pod metrics-server-74d5856cc6-b5qcv: exit status 1 (75.415867ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-b5qcv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-467375 describe pod metrics-server-74d5856cc6-b5qcv: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (153.85s)
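Note on the failure above: the kubelet log shows metrics-server pinned to the image fake.domain/registry.k8s.io/echoserver:1.4, an apparently deliberate, unresolvable registry (the DNS lookup for fake.domain fails in the kubelet log), so the ImagePullBackOff loop is the expected steady state for this profile rather than an infrastructure outage. The final "not found" from kubectl describe is most likely a namespace mismatch: the non-running-pods query is cluster-wide (-A), but the describe call omits -n kube-system and therefore looks in the default namespace. A by-hand version of the same post-mortem, assuming the profile and context from the log (the only change is the added namespace flag, which the original helper invocation does not pass):

	# list pods that are not Running, cluster-wide (same field selector the helper uses)
	kubectl --context old-k8s-version-467375 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# describe the pod in the namespace the kubelet log reports it in
	kubectl --context old-k8s-version-467375 -n kube-system \
	  describe pod metrics-server-74d5856cc6-b5qcv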

                                                
                                    

Test pass (229/292)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 9.47
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.3/json-events 4.63
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.14
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
19 TestBinaryMirror 0.56
20 TestOffline 103.45
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
25 TestAddons/Setup 145.54
27 TestAddons/parallel/Registry 15.8
30 TestAddons/parallel/MetricsServer 5.86
31 TestAddons/parallel/HelmTiller 11.69
33 TestAddons/parallel/CSI 86.8
34 TestAddons/parallel/Headlamp 13.36
35 TestAddons/parallel/CloudSpanner 5.7
36 TestAddons/parallel/LocalPath 54.85
37 TestAddons/parallel/NvidiaDevicePlugin 5.59
40 TestAddons/serial/GCPAuth/Namespaces 0.12
42 TestCertOptions 49.14
43 TestCertExpiration 321.08
45 TestForceSystemdFlag 77.08
46 TestForceSystemdEnv 73.42
48 TestKVMDriverInstallOrUpdate 1.65
52 TestErrorSpam/setup 48.02
53 TestErrorSpam/start 0.37
54 TestErrorSpam/status 0.76
55 TestErrorSpam/pause 1.59
56 TestErrorSpam/unpause 1.75
57 TestErrorSpam/stop 2.25
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 98.42
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 35.44
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.09
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
69 TestFunctional/serial/CacheCmd/cache/add_local 1.28
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.13
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 36.47
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.51
80 TestFunctional/serial/LogsFileCmd 1.55
81 TestFunctional/serial/InvalidService 4.83
83 TestFunctional/parallel/ConfigCmd 0.41
84 TestFunctional/parallel/DashboardCmd 15.52
85 TestFunctional/parallel/DryRun 0.28
86 TestFunctional/parallel/InternationalLanguage 0.14
87 TestFunctional/parallel/StatusCmd 0.89
91 TestFunctional/parallel/ServiceCmdConnect 28.58
92 TestFunctional/parallel/AddonsCmd 0.14
93 TestFunctional/parallel/PersistentVolumeClaim 51.73
95 TestFunctional/parallel/SSHCmd 0.45
96 TestFunctional/parallel/CpCmd 1.16
97 TestFunctional/parallel/MySQL 30.72
98 TestFunctional/parallel/FileSync 0.25
99 TestFunctional/parallel/CertSync 1.7
103 TestFunctional/parallel/NodeLabels 0.07
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
107 TestFunctional/parallel/License 0.17
108 TestFunctional/parallel/Version/short 0.06
109 TestFunctional/parallel/Version/components 1.08
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.41
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.37
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.42
114 TestFunctional/parallel/ImageCommands/ImageBuild 4.32
115 TestFunctional/parallel/ImageCommands/Setup 1.04
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.47
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.94
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 11.46
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.12
132 TestFunctional/parallel/ImageCommands/ImageRemove 1.06
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.7
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.36
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
136 TestFunctional/parallel/ProfileCmd/profile_list 0.32
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.57
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
139 TestFunctional/parallel/MountCmd/any-port 8.04
140 TestFunctional/parallel/ServiceCmd/List 1.37
141 TestFunctional/parallel/ServiceCmd/JSONOutput 1.41
142 TestFunctional/parallel/MountCmd/specific-port 2.13
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
144 TestFunctional/parallel/ServiceCmd/Format 0.43
145 TestFunctional/parallel/ServiceCmd/URL 0.43
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
147 TestFunctional/delete_addon-resizer_images 0.07
148 TestFunctional/delete_my-image_image 0.01
149 TestFunctional/delete_minikube_cached_images 0.01
153 TestIngressAddonLegacy/StartLegacyK8sCluster 103.68
155 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.51
156 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.67
160 TestJSONOutput/start/Command 100.19
161 TestJSONOutput/start/Audit 0
163 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/pause/Command 0.68
167 TestJSONOutput/pause/Audit 0
169 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/unpause/Command 0.63
173 TestJSONOutput/unpause/Audit 0
175 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/stop/Command 23.13
179 TestJSONOutput/stop/Audit 0
181 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
183 TestErrorJSONOutput 0.22
188 TestMainNoArgs 0.06
189 TestMinikubeProfile 102.72
192 TestMountStart/serial/StartWithMountFirst 27.09
193 TestMountStart/serial/VerifyMountFirst 0.41
194 TestMountStart/serial/StartWithMountSecond 27.47
195 TestMountStart/serial/VerifyMountSecond 0.41
196 TestMountStart/serial/DeleteFirst 0.67
197 TestMountStart/serial/VerifyMountPostDelete 0.41
198 TestMountStart/serial/Stop 1.16
199 TestMountStart/serial/RestartStopped 25.92
200 TestMountStart/serial/VerifyMountPostStop 0.41
203 TestMultiNode/serial/FreshStart2Nodes 109.95
204 TestMultiNode/serial/DeployApp2Nodes 4.41
206 TestMultiNode/serial/AddNode 45.02
207 TestMultiNode/serial/ProfileList 0.23
208 TestMultiNode/serial/CopyFile 7.7
209 TestMultiNode/serial/StopNode 3
210 TestMultiNode/serial/StartAfterStop 28.63
212 TestMultiNode/serial/DeleteNode 1.74
214 TestMultiNode/serial/RestartMultiNode 444.22
215 TestMultiNode/serial/ValidateNameConflict 49.56
222 TestScheduledStopUnix 116.59
228 TestKubernetesUpgrade 208.54
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
232 TestNoKubernetes/serial/StartWithK8s 100.59
233 TestNoKubernetes/serial/StartWithStopK8s 35.27
234 TestStoppedBinaryUpgrade/Setup 0.44
236 TestNoKubernetes/serial/Start 28.71
237 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
238 TestNoKubernetes/serial/ProfileList 1.13
239 TestNoKubernetes/serial/Stop 1.87
240 TestNoKubernetes/serial/StartNoArgs 26.05
248 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
256 TestNetworkPlugins/group/false 4.08
261 TestPause/serial/Start 112.72
264 TestStartStop/group/old-k8s-version/serial/FirstStart 355.75
266 TestStartStop/group/no-preload/serial/FirstStart 158.41
267 TestStoppedBinaryUpgrade/MinikubeLogs 0.41
269 TestStartStop/group/embed-certs/serial/FirstStart 152.72
271 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 101.49
272 TestStartStop/group/no-preload/serial/DeployApp 8.5
273 TestStartStop/group/embed-certs/serial/DeployApp 10.51
274 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.28
276 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
278 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.46
279 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
283 TestStartStop/group/no-preload/serial/SecondStart 694.28
284 TestStartStop/group/embed-certs/serial/SecondStart 578.55
285 TestStartStop/group/old-k8s-version/serial/DeployApp 8.43
286 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.9
289 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 515.45
291 TestStartStop/group/old-k8s-version/serial/SecondStart 589.03
301 TestStartStop/group/newest-cni/serial/FirstStart 61.71
302 TestNetworkPlugins/group/auto/Start 115.22
303 TestNetworkPlugins/group/kindnet/Start 100.67
304 TestStartStop/group/newest-cni/serial/DeployApp 0
305 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.93
306 TestStartStop/group/newest-cni/serial/Stop 12.47
307 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
308 TestStartStop/group/newest-cni/serial/SecondStart 58.75
309 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
310 TestNetworkPlugins/group/auto/KubeletFlags 0.23
311 TestNetworkPlugins/group/auto/NetCatPod 13.39
312 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
313 TestNetworkPlugins/group/kindnet/NetCatPod 12.49
314 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
317 TestStartStop/group/newest-cni/serial/Pause 2.79
318 TestNetworkPlugins/group/calico/Start 97.67
319 TestNetworkPlugins/group/auto/DNS 0.18
320 TestNetworkPlugins/group/auto/Localhost 0.16
321 TestNetworkPlugins/group/auto/HairPin 0.16
322 TestNetworkPlugins/group/kindnet/DNS 0.22
323 TestNetworkPlugins/group/kindnet/Localhost 0.19
324 TestNetworkPlugins/group/kindnet/HairPin 0.17
325 TestNetworkPlugins/group/custom-flannel/Start 97.46
326 TestNetworkPlugins/group/enable-default-cni/Start 144.7
327 TestNetworkPlugins/group/flannel/Start 153.24
328 TestNetworkPlugins/group/calico/ControllerPod 5.04
329 TestNetworkPlugins/group/calico/KubeletFlags 0.24
330 TestNetworkPlugins/group/calico/NetCatPod 12.35
331 TestNetworkPlugins/group/calico/DNS 0.2
332 TestNetworkPlugins/group/calico/Localhost 0.19
333 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
334 TestNetworkPlugins/group/calico/HairPin 0.2
335 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.45
336 TestNetworkPlugins/group/custom-flannel/DNS 0.21
337 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
338 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
339 TestNetworkPlugins/group/bridge/Start 62.48
340 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
341 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.34
342 TestNetworkPlugins/group/flannel/ControllerPod 5.03
343 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
344 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
345 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
346 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
347 TestNetworkPlugins/group/flannel/NetCatPod 12.42
348 TestNetworkPlugins/group/flannel/DNS 0.24
349 TestNetworkPlugins/group/flannel/Localhost 0.21
350 TestNetworkPlugins/group/flannel/HairPin 0.16
351 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
352 TestNetworkPlugins/group/bridge/NetCatPod 12.38
353 TestNetworkPlugins/group/bridge/DNS 26.41
354 TestNetworkPlugins/group/bridge/Localhost 0.15
355 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.16.0/json-events (9.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-645515 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-645515 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.471663485s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.47s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-645515
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-645515: exit status 85 (71.37171ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-645515 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |          |
	|         | -p download-only-645515        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:00:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:00:38.338943   16310 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:00:38.339201   16310 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:38.339211   16310 out.go:309] Setting ErrFile to fd 2...
	I1024 19:00:38.339216   16310 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:38.339430   16310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	W1024 19:00:38.339577   16310 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17485-9023/.minikube/config/config.json: open /home/jenkins/minikube-integration/17485-9023/.minikube/config/config.json: no such file or directory
	I1024 19:00:38.340220   16310 out.go:303] Setting JSON to true
	I1024 19:00:38.341050   16310 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2336,"bootTime":1698171702,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:00:38.341112   16310 start.go:138] virtualization: kvm guest
	I1024 19:00:38.343758   16310 out.go:97] [download-only-645515] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:00:38.345316   16310 out.go:169] MINIKUBE_LOCATION=17485
	W1024 19:00:38.343885   16310 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball: no such file or directory
	I1024 19:00:38.343951   16310 notify.go:220] Checking for updates...
	I1024 19:00:38.348056   16310 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:00:38.349527   16310 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:00:38.350909   16310 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:00:38.352234   16310 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1024 19:00:38.354944   16310 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1024 19:00:38.355179   16310 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:00:38.459852   16310 out.go:97] Using the kvm2 driver based on user configuration
	I1024 19:00:38.459878   16310 start.go:298] selected driver: kvm2
	I1024 19:00:38.459884   16310 start.go:902] validating driver "kvm2" against <nil>
	I1024 19:00:38.460180   16310 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:00:38.460318   16310 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9023/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:00:38.474276   16310 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:00:38.474332   16310 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:00:38.474777   16310 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1024 19:00:38.474943   16310 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1024 19:00:38.475008   16310 cni.go:84] Creating CNI manager for ""
	I1024 19:00:38.475024   16310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1024 19:00:38.475033   16310 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1024 19:00:38.475041   16310 start_flags.go:323] config:
	{Name:download-only-645515 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-645515 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:00:38.475234   16310 iso.go:125] acquiring lock: {Name:mkc407ecfb654b1cd3059d4101c7525e7b1bf26d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:00:38.477334   16310 out.go:97] Downloading VM boot image ...
	I1024 19:00:38.477371   16310 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso
	I1024 19:00:41.423990   16310 out.go:97] Starting control plane node download-only-645515 in cluster download-only-645515
	I1024 19:00:41.424019   16310 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 19:00:41.445688   16310 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1024 19:00:41.445735   16310 cache.go:57] Caching tarball of preloaded images
	I1024 19:00:41.445921   16310 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 19:00:41.447874   16310 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1024 19:00:41.447903   16310 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:00:41.473117   16310 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17485-9023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-645515"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
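The non-zero exit above is expected: a --download-only run never creates a control-plane node, so minikube logs can only print the audit table and the last start log before reporting that the node does not exist, and the test asserts exactly that failure. A minimal by-hand check, assuming the profile has not yet been removed (it is deleted later by TestDownloadOnly/DeleteAlwaysSucceeds):

	out/minikube-linux-amd64 logs -p download-only-645515
	echo "exit code: $?"   # the report records exit status 85 for this invocation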

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/json-events (4.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-645515 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-645515 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.627874884s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (4.63s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-645515
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-645515: exit status 85 (70.313665ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-645515 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |          |
	|         | -p download-only-645515        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-645515 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |          |
	|         | -p download-only-645515        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:00:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:00:47.881104   16369 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:00:47.881218   16369 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:47.881229   16369 out.go:309] Setting ErrFile to fd 2...
	I1024 19:00:47.881236   16369 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:47.881443   16369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	W1024 19:00:47.881574   16369 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17485-9023/.minikube/config/config.json: open /home/jenkins/minikube-integration/17485-9023/.minikube/config/config.json: no such file or directory
	I1024 19:00:47.881984   16369 out.go:303] Setting JSON to true
	I1024 19:00:47.882854   16369 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2346,"bootTime":1698171702,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:00:47.882913   16369 start.go:138] virtualization: kvm guest
	I1024 19:00:47.885081   16369 out.go:97] [download-only-645515] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:00:47.886586   16369 out.go:169] MINIKUBE_LOCATION=17485
	I1024 19:00:47.885239   16369 notify.go:220] Checking for updates...
	I1024 19:00:47.890005   16369 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:00:47.891515   16369 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:00:47.892968   16369 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:00:47.894379   16369 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-645515"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-645515
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-397693 --alsologtostderr --binary-mirror http://127.0.0.1:36043 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-397693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-397693
--- PASS: TestBinaryMirror (0.56s)
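TestBinaryMirror exercises the --binary-mirror flag, which tells minikube to fetch the kubectl, kubelet, and kubeadm binaries from an alternate URL instead of the default upstream location; in this run the test served them from a local HTTP endpoint (127.0.0.1:36043). A rough sketch of the same idea outside the test harness, assuming a local directory already laid out the way minikube expects (the port and directory below are placeholders, not taken from the report):

	# serve a local directory of cached binaries
	python3 -m http.server 36043 --directory /path/to/binary-cache &

	# point a download-only start at the mirror
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:36043 --driver=kvm2 --container-runtime=crio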

                                                
                                    
x
+
TestOffline (103.45s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-787603 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-787603 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m42.59320779s)
helpers_test.go:175: Cleaning up "offline-crio-787603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-787603
--- PASS: TestOffline (103.45s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-866342
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-866342: exit status 85 (73.99846ms)

                                                
                                                
-- stdout --
	* Profile "addons-866342" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-866342"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-866342
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-866342: exit status 85 (76.580656ms)

                                                
                                                
-- stdout --
	* Profile "addons-866342" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-866342"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (145.54s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-866342 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-866342 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m25.537335327s)
--- PASS: TestAddons/Setup (145.54s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 27.222968ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9fjkv" [16c9f9e1-0151-4045-bb71-6e31267e58df] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.016673439s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8jqwg" [bd54e9d3-a6ec-43ec-910e-38ddb0de2574] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.016248388s
addons_test.go:339: (dbg) Run:  kubectl --context addons-866342 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-866342 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-866342 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.785861724s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 ip
2023/10/24 19:03:34 [DEBUG] GET http://192.168.39.163:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.80s)
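The registry check above has two parts: an in-cluster reachability probe via the service DNS name, and a host-side request against the node IP (the DEBUG line shows the registry answering on port 5000). Both can be repeated by hand; the first command is copied from the log, while the second uses the standard registry API base path /v2/, which the test itself does not call:

	# in-cluster: resolve and probe the registry service
	kubectl --context addons-866342 run registry-test --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

	# host-side: probe the registry exposed on the node IP logged above
	curl -sI http://192.168.39.163:5000/v2/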

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.86s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 11.187197ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-r2sdc" [216942df-99c1-4c92-b8bd-f0594dbb6894] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.02437053s
addons_test.go:414: (dbg) Run:  kubectl --context addons-866342 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.86s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.69s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 4.416634ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-mzrhm" [3653bdf1-8b0f-4839-abe0-48a7faadeb74] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.021360667s
addons_test.go:472: (dbg) Run:  kubectl --context addons-866342 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-866342 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.310441109s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.69s)

                                                
                                    
x
+
TestAddons/parallel/CSI (86.8s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 28.387434ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-866342 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-866342 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1e98504f-6675-4033-9b68-c3d929787096] Pending
helpers_test.go:344: "task-pv-pod" [1e98504f-6675-4033-9b68-c3d929787096] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1e98504f-6675-4033-9b68-c3d929787096] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.048461465s
addons_test.go:583: (dbg) Run:  kubectl --context addons-866342 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-866342 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-866342 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-866342 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-866342 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-866342 delete pod task-pv-pod: (1.010303113s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-866342 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-866342 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-866342 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [62c7fbe2-93ab-4cf5-b90c-b3cffec5d1e7] Pending
helpers_test.go:344: "task-pv-pod-restore" [62c7fbe2-93ab-4cf5-b90c-b3cffec5d1e7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [62c7fbe2-93ab-4cf5-b90c-b3cffec5d1e7] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.019935819s
addons_test.go:625: (dbg) Run:  kubectl --context addons-866342 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-866342 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-866342 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-866342 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.80409379s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (86.80s)
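The long runs of "kubectl get pvc ... -o jsonpath={.status.phase}" above are the suite's readiness polls: the helper re-runs the same command until the claim reports Bound (and, for the snapshot, until readyToUse is true) or the stated timeout expires, which is why every attempt appears as its own log line. A minimal sketch of that polling pattern, assuming only that kubectl is on PATH; the helper name waitForPVCPhase is hypothetical, not the suite's actual function:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPVCPhase polls kubectl until the claim reports the wanted phase
    // (e.g. "Bound") or the timeout elapses. Hypothetical helper for
    // illustration only; the real polling lives in helpers_test.go.
    func waitForPVCPhase(context, ns, pvc, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", context, "get", "pvc", pvc,
                "-n", ns, "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == want {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, pvc, want, timeout)
    }

    func main() {
        if err := waitForPVCPhase("addons-866342", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }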

                                                
                                    
TestAddons/parallel/Headlamp (13.36s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-866342 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-866342 --alsologtostderr -v=1: (1.312447065s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-28p64" [4ac99cca-4bd3-4726-92f0-0a693caf1c3d] Pending
helpers_test.go:344: "headlamp-94b766c-28p64" [4ac99cca-4bd3-4726-92f0-0a693caf1c3d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-28p64" [4ac99cca-4bd3-4726-92f0-0a693caf1c3d] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.042687077s
--- PASS: TestAddons/parallel/Headlamp (13.36s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.7s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-fw2zv" [e9e59b5e-7212-431f-a2f9-74bb2d34eaa8] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014394672s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-866342
--- PASS: TestAddons/parallel/CloudSpanner (5.70s)

                                                
                                    
TestAddons/parallel/LocalPath (54.85s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-866342 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-866342 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-866342 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [aeed462f-d7d8-4ded-9d7a-cf4ed906a886] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [aeed462f-d7d8-4ded-9d7a-cf4ed906a886] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [aeed462f-d7d8-4ded-9d7a-cf4ed906a886] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.01122792s
addons_test.go:890: (dbg) Run:  kubectl --context addons-866342 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 ssh "cat /opt/local-path-provisioner/pvc-36d1a6de-39d6-4c81-a7f0-3bf4da62b74d_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-866342 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-866342 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-866342 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-866342 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.262563204s)
--- PASS: TestAddons/parallel/LocalPath (54.85s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kcrfw" [56d67427-465c-406a-a425-3ded489815e8] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.025971253s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-866342
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.59s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-866342 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-866342 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (49.14s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-116938 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-116938 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (47.59452683s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-116938 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-116938 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-116938 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-116938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-116938
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-116938: (1.042837314s)
--- PASS: TestCertOptions (49.14s)
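TestCertOptions passes extra --apiserver-ips, --apiserver-names and --apiserver-port flags and then inspects /var/lib/minikube/certs/apiserver.crt with openssl to confirm the requested values were baked into the API server certificate. A minimal sketch of the same SAN check in Go, assuming a local copy of the PEM file (the path apiserver.crt below is a placeholder):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // Prints the DNS and IP subject alternative names of a PEM-encoded
    // certificate; for this test the requested 127.0.0.1, 192.168.15.15,
    // localhost and www.google.com entries should all appear.
    func main() {
        data, err := os.ReadFile("apiserver.crt") // placeholder path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs:", cert.IPAddresses)
    }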

                                                
                                    
TestCertExpiration (321.08s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-051222 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1024 19:58:19.104245   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-051222 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m21.4190789s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-051222 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E1024 20:03:02.154489   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 20:03:10.558737   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 20:03:19.104823   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-051222 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (58.633517596s)
helpers_test.go:175: Cleaning up "cert-expiration-051222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-051222
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-051222: (1.024581284s)
--- PASS: TestCertExpiration (321.08s)

                                                
                                    
TestForceSystemdFlag (77.08s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-569251 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-569251 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m15.842076031s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-569251 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-569251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-569251
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-569251: (1.017207695s)
--- PASS: TestForceSystemdFlag (77.08s)
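TestForceSystemdFlag starts a cluster with --force-systemd and then cats /etc/crio/crio.conf.d/02-crio.conf over ssh to confirm CRI-O was switched to the systemd cgroup manager. A rough sketch of that check, assuming the usual CRI-O key name cgroup_manager (the test's exact assertion is in docker_test.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Read the same drop-in file the test inspects over ssh.
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-569251",
            "ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
        if err != nil {
            panic(err)
        }
        // Assumed key name; CRI-O normally expresses this as cgroup_manager = "systemd".
        fmt.Println("systemd cgroup manager:",
            strings.Contains(string(out), `cgroup_manager = "systemd"`))
    }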

                                                
                                    
TestForceSystemdEnv (73.42s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-912715 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-912715 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m12.626320886s)
helpers_test.go:175: Cleaning up "force-systemd-env-912715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-912715
--- PASS: TestForceSystemdEnv (73.42s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.65s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.65s)

                                                
                                    
TestErrorSpam/setup (48.02s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-994944 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-994944 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-994944 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-994944 --driver=kvm2  --container-runtime=crio: (48.023234024s)
--- PASS: TestErrorSpam/setup (48.02s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
TestErrorSpam/pause (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 pause
--- PASS: TestErrorSpam/pause (1.59s)

                                                
                                    
TestErrorSpam/unpause (1.75s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

                                                
                                    
TestErrorSpam/stop (2.25s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 stop: (2.094914501s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-994944 --log_dir /tmp/nospam-994944 stop
--- PASS: TestErrorSpam/stop (2.25s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17485-9023/.minikube/files/etc/test/nested/copy/16298/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (98.42s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853597 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-853597 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m38.41964628s)
--- PASS: TestFunctional/serial/StartWithProxy (98.42s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.44s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853597 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-853597 --alsologtostderr -v=8: (35.440812621s)
functional_test.go:659: soft start took 35.441421164s for "functional-853597" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.44s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-853597 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 cache add registry.k8s.io/pause:3.1: (1.021499861s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 cache add registry.k8s.io/pause:3.3: (1.053994055s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 cache add registry.k8s.io/pause:latest: (1.243071262s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-853597 /tmp/TestFunctionalserialCacheCmdcacheadd_local1478712845/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 cache add minikube-local-cache-test:functional-853597
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 cache delete minikube-local-cache-test:functional-853597
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-853597
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853597 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (238.299322ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)
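The cache_reload sequence above removes registry.k8s.io/pause:latest from the node with crictl rmi, confirms that crictl inspecti now fails (the non-zero exit captured above), runs "cache reload", and then confirms the image is present again. A minimal sketch of that presence check by exit code, reusing the binary path and profile name from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imagePresent reports whether crictl on the node can inspect the image:
    // a zero exit status from crictl inspecti means the image is present.
    func imagePresent(profile, image string) bool {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
            "ssh", "sudo", "crictl", "inspecti", image)
        return cmd.Run() == nil
    }

    func main() {
        fmt.Println(imagePresent("functional-853597", "registry.k8s.io/pause:latest"))
    }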

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 kubectl -- --context functional-853597 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-853597 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.47s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853597 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-853597 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.471293986s)
functional_test.go:757: restart took 36.471435837s for "functional-853597" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.47s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-853597 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
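ComponentHealth lists the control-plane pods as JSON and checks that each one is Running with a Ready condition of True, which is what produces the phase/status pairs logged above. A small sketch of parsing the same kubectl output, modelling only the fields the check needs:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type podList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Phase      string `json:"phase"`
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-853597",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
        if err != nil {
            panic(err)
        }
        var pods podList
        if err := json.Unmarshal(out, &pods); err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := "False"
            for _, c := range p.Status.Conditions {
                if c.Type == "Ready" {
                    ready = c.Status
                }
            }
            fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
        }
    }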

                                                
                                    
TestFunctional/serial/LogsCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 logs: (1.50970318s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 logs --file /tmp/TestFunctionalserialLogsFileCmd1010025263/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 logs --file /tmp/TestFunctionalserialLogsFileCmd1010025263/001/logs.txt: (1.545684663s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                    
TestFunctional/serial/InvalidService (4.83s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-853597 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-853597
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-853597: exit status 115 (314.675742ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.249:31258 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-853597 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-853597 delete -f testdata/invalidsvc.yaml: (1.2112605s)
--- PASS: TestFunctional/serial/InvalidService (4.83s)
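InvalidService is a negative test: the service from testdata/invalidsvc.yaml has no running pod behind it, so "minikube service" prints the NodePort URL table but exits 115 with SVC_UNREACHABLE. One way to see the underlying condition is to look at the service's Endpoints, which stay empty when nothing backs the selector; a minimal sketch, assuming the same context and service name:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // An empty address list here is what makes the service unreachable.
        out, err := exec.Command("kubectl", "--context", "functional-853597",
            "get", "endpoints", "invalid-svc",
            "-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        addrs := strings.Fields(string(out))
        fmt.Printf("ready endpoint addresses: %d %v\n", len(addrs), addrs)
    }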

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853597 config get cpus: exit status 14 (66.497761ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853597 config get cpus: exit status 14 (63.790354ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-853597 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-853597 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23978: os: process already finished
E1024 19:14:00.066873   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DashboardCmd (15.52s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853597 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-853597 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.197103ms)

                                                
                                                
-- stdout --
	* [functional-853597] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 19:13:44.148198   23887 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:13:44.148332   23887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:13:44.148341   23887 out.go:309] Setting ErrFile to fd 2...
	I1024 19:13:44.148346   23887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:13:44.148499   23887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 19:13:44.149037   23887 out.go:303] Setting JSON to false
	I1024 19:13:44.149961   23887 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3122,"bootTime":1698171702,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:13:44.150027   23887 start.go:138] virtualization: kvm guest
	I1024 19:13:44.151614   23887 out.go:177] * [functional-853597] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:13:44.153254   23887 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:13:44.153206   23887 notify.go:220] Checking for updates...
	I1024 19:13:44.154281   23887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:13:44.155574   23887 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:13:44.157449   23887 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:13:44.158692   23887 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:13:44.160086   23887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:13:44.161768   23887 config.go:182] Loaded profile config "functional-853597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:13:44.162132   23887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:13:44.162174   23887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:13:44.176994   23887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34095
	I1024 19:13:44.177399   23887 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:13:44.178022   23887 main.go:141] libmachine: Using API Version  1
	I1024 19:13:44.178039   23887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:13:44.178400   23887 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:13:44.178589   23887 main.go:141] libmachine: (functional-853597) Calling .DriverName
	I1024 19:13:44.178810   23887 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:13:44.179086   23887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:13:44.179118   23887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:13:44.192981   23887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I1024 19:13:44.193376   23887 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:13:44.193846   23887 main.go:141] libmachine: Using API Version  1
	I1024 19:13:44.193867   23887 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:13:44.194190   23887 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:13:44.194364   23887 main.go:141] libmachine: (functional-853597) Calling .DriverName
	I1024 19:13:44.225335   23887 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 19:13:44.226703   23887 start.go:298] selected driver: kvm2
	I1024 19:13:44.226719   23887 start.go:902] validating driver "kvm2" against &{Name:functional-853597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:functional-853597 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.249 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:13:44.226867   23887 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:13:44.229406   23887 out.go:177] 
	W1024 19:13:44.230709   23887 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1024 19:13:44.231995   23887 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853597 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
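DryRun asks for only 250MB of memory, which is below the 1800MB usable minimum reported above, so the first command exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the existing cluster; the second dry run without the memory override succeeds. A minimal sketch of capturing that exit status from Go, reusing the command line from the log:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-853597",
            "--dry-run", "--memory", "250MB", "--alsologtostderr",
            "--driver=kvm2", "--container-runtime=crio")
        err := cmd.Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Println("exit status:", exitErr.ExitCode()) // expected 23 here
        } else {
            fmt.Println("unexpected result:", err)
        }
    }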

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-853597 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-853597 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.524463ms)

                                                
                                                
-- stdout --
	* [functional-853597] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 19:13:44.009201   23859 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:13:44.009309   23859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:13:44.009319   23859 out.go:309] Setting ErrFile to fd 2...
	I1024 19:13:44.009326   23859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:13:44.009613   23859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 19:13:44.010135   23859 out.go:303] Setting JSON to false
	I1024 19:13:44.010972   23859 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3122,"bootTime":1698171702,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:13:44.011029   23859 start.go:138] virtualization: kvm guest
	I1024 19:13:44.013346   23859 out.go:177] * [functional-853597] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1024 19:13:44.014784   23859 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:13:44.014834   23859 notify.go:220] Checking for updates...
	I1024 19:13:44.017453   23859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:13:44.018860   23859 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:13:44.020130   23859 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:13:44.021392   23859 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:13:44.022628   23859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:13:44.024328   23859 config.go:182] Loaded profile config "functional-853597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:13:44.024685   23859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:13:44.024732   23859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:13:44.039119   23859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1024 19:13:44.039472   23859 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:13:44.039931   23859 main.go:141] libmachine: Using API Version  1
	I1024 19:13:44.039952   23859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:13:44.040301   23859 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:13:44.040464   23859 main.go:141] libmachine: (functional-853597) Calling .DriverName
	I1024 19:13:44.040678   23859 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:13:44.041007   23859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:13:44.041048   23859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:13:44.054402   23859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40709
	I1024 19:13:44.054801   23859 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:13:44.055291   23859 main.go:141] libmachine: Using API Version  1
	I1024 19:13:44.055311   23859 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:13:44.055620   23859 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:13:44.055807   23859 main.go:141] libmachine: (functional-853597) Calling .DriverName
	I1024 19:13:44.085771   23859 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1024 19:13:44.086878   23859 start.go:298] selected driver: kvm2
	I1024 19:13:44.086896   23859 start.go:902] validating driver "kvm2" against &{Name:functional-853597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:functional-853597 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.249 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:13:44.087008   23859 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:13:44.089068   23859 out.go:177] 
	W1024 19:13:44.090300   23859 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1024 19:13:44.091594   23859 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
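
Note: the French output above is the point of this test, not an artifact; it re-runs minikube start under a French locale with a deliberately undersized memory request, so the command exits early with the localized RSRC_INSUFFICIENT_REQ_MEMORY message instead of creating a VM. A rough reproduction (the exact flags used by the test are an assumption here, not taken from this log):

    LC_ALL=fr out/minikube-linux-amd64 start -p functional-853597 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # expected: a localized "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..." error and a non-zero exit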

x
+
TestFunctional/parallel/StatusCmd (0.89s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)
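
The -f flag recorded above takes a Go template over the status fields (Host, Kubelet, APIServer, Kubeconfig), so scripts can pull out just the value they need; -o json is the structured alternative. A minimal sketch based on the invocations above:

    out/minikube-linux-amd64 -p functional-853597 status -f 'apiserver:{{.APIServer}}'
    out/minikube-linux-amd64 -p functional-853597 status -o json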

x
+
TestFunctional/parallel/ServiceCmdConnect (28.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-853597 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-853597 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-m4b8x" [0c7713cb-e82b-40d2-aa02-fb5461414d84] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-m4b8x" [0c7713cb-e82b-40d2-aa02-fb5461414d84] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 28.014445116s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.249:30923
functional_test.go:1674: http://192.168.39.249:30923: success! body:

Hostname: hello-node-connect-55497b8b78-m4b8x

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.249:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.249:30923
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (28.58s)
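
The URL printed by "minikube service --url" is a NodePort endpoint on the VM's IP, so it can be exercised directly from the host once the pod is Running. Against the endpoint recorded above:

    curl -s http://192.168.39.249:30923/
    # returns the echoserver response shown above (Hostname, Server values, Request Information, ...)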

x
+
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

x
+
TestFunctional/parallel/PersistentVolumeClaim (51.73s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1b524758-ec5e-4a64-a3c9-9f50f609232a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012774609s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-853597 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-853597 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-853597 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-853597 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-853597 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [01c5eab3-eef2-4cf7-b8c9-2cc07776895a] Pending
E1024 19:13:19.104296   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:13:19.110295   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:13:19.120554   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:13:19.140896   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:13:19.181211   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:13:19.261521   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:13:19.422322   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:13:19.743481   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [01c5eab3-eef2-4cf7-b8c9-2cc07776895a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1024 19:13:20.383849   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [01c5eab3-eef2-4cf7-b8c9-2cc07776895a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.036933519s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-853597 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-853597 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-853597 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e5b574f0-d057-43bc-93ac-68d96bf222f1] Pending
helpers_test.go:344: "sp-pod" [e5b574f0-d057-43bc-93ac-68d96bf222f1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e5b574f0-d057-43bc-93ac-68d96bf222f1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.042140981s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-853597 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.73s)
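
The pass criterion here is persistence across pod recreation: a file written through the claim must still be visible after the consuming pod is deleted and recreated. Condensed from the steps recorded above (the test waits for the new pod to be Running before the final check):

    kubectl --context functional-853597 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-853597 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-853597 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-853597 exec sp-pod -- ls /tmp/mount   # foo should still be listed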

x
+
TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

x
+
TestFunctional/parallel/CpCmd (1.16s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh -n functional-853597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 cp functional-853597:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2871175983/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh -n functional-853597 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.16s)
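
minikube cp copies files in both directions over SSH; a &lt;profile&gt;:&lt;path&gt; argument addresses a path inside the VM. The two directions exercised above, as standalone commands (the local destination path is illustrative):

    out/minikube-linux-amd64 -p functional-853597 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-853597 cp functional-853597:/home/docker/cp-test.txt ./cp-test.txt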

x
+
TestFunctional/parallel/MySQL (30.72s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-853597 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-77w2r" [ed2a5fe6-0c50-4e57-a4b1-ddaa656bd57e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-77w2r" [ed2a5fe6-0c50-4e57-a4b1-ddaa656bd57e] Running
E1024 19:13:29.345353   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.052616799s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-853597 exec mysql-859648c796-77w2r -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-853597 exec mysql-859648c796-77w2r -- mysql -ppassword -e "show databases;": exit status 1 (518.35678ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-853597 exec mysql-859648c796-77w2r -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-853597 exec mysql-859648c796-77w2r -- mysql -ppassword -e "show databases;": exit status 1 (640.345964ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-853597 exec mysql-859648c796-77w2r -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-853597 exec mysql-859648c796-77w2r -- mysql -ppassword -e "show databases;": exit status 1 (276.232027ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
E1024 19:13:39.586155   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-853597 exec mysql-859648c796-77w2r -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.72s)
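
The ERROR 1045 and ERROR 2002 exits above are expected: the mysql container reports Running before its initialization has finished (the root password is not yet usable and mysqld is restarted during init), so the test simply retries the probe until it succeeds:

    kubectl --context functional-853597 exec mysql-859648c796-77w2r -- mysql -ppassword -e "show databases;"
    # retried until it exits 0; earlier attempts fail while initialization is still in progress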

x
+
TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16298/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "sudo cat /etc/test/nested/copy/16298/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

x
+
TestFunctional/parallel/CertSync (1.7s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16298.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "sudo cat /etc/ssl/certs/16298.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16298.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "sudo cat /usr/share/ca-certificates/16298.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/162982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "sudo cat /etc/ssl/certs/162982.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/162982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "sudo cat /usr/share/ca-certificates/162982.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)
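
What is being verified: certificates supplied to minikube on the host (named after the test run's PID, 16298) are synced into the guest and also installed under their OpenSSL subject-hash names (51391683.0, 3ec20f2e.0). The exact host-side source directory is not shown in this log; presence inside the VM can be spot-checked the same way the test does:

    out/minikube-linux-amd64 -p functional-853597 ssh "sudo cat /etc/ssl/certs/16298.pem"
    out/minikube-linux-amd64 -p functional-853597 ssh "sudo cat /etc/ssl/certs/51391683.0"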

x
+
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-853597 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853597 ssh "sudo systemctl is-active docker": exit status 1 (256.466732ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853597 ssh "sudo systemctl is-active containerd": exit status 1 (249.053136ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
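
The non-zero exits above are the expected outcome, not failures: systemctl is-active exits non-zero for units that are not active (surfaced here as ssh status 3), and with crio as the selected runtime both docker and containerd should report inactive. Spot check:

    out/minikube-linux-amd64 -p functional-853597 ssh "sudo systemctl is-active docker"
    # prints "inactive" and exits non-zero, which is exactly what the test asserts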

x
+
TestFunctional/parallel/License (0.17s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

x
+
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

x
+
TestFunctional/parallel/Version/components (1.08s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 version -o=json --components: (1.075646857s)
--- PASS: TestFunctional/parallel/Version/components (1.08s)

x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853597 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-853597
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-853597
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853597 image ls --format short --alsologtostderr:
I1024 19:13:53.667092   24742 out.go:296] Setting OutFile to fd 1 ...
I1024 19:13:53.667205   24742 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:13:53.667215   24742 out.go:309] Setting ErrFile to fd 2...
I1024 19:13:53.667223   24742 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:13:53.667434   24742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
I1024 19:13:53.667997   24742 config.go:182] Loaded profile config "functional-853597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:13:53.668112   24742 config.go:182] Loaded profile config "functional-853597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:13:53.668469   24742 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1024 19:13:53.668529   24742 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:13:53.685783   24742 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
I1024 19:13:53.686313   24742 main.go:141] libmachine: () Calling .GetVersion
I1024 19:13:53.687050   24742 main.go:141] libmachine: Using API Version  1
I1024 19:13:53.687082   24742 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:13:53.687453   24742 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:13:53.687756   24742 main.go:141] libmachine: (functional-853597) Calling .GetState
I1024 19:13:53.689780   24742 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1024 19:13:53.689815   24742 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:13:53.706083   24742 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
I1024 19:13:53.706454   24742 main.go:141] libmachine: () Calling .GetVersion
I1024 19:13:53.706943   24742 main.go:141] libmachine: Using API Version  1
I1024 19:13:53.706968   24742 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:13:53.707305   24742 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:13:53.707508   24742 main.go:141] libmachine: (functional-853597) Calling .DriverName
I1024 19:13:53.707732   24742 ssh_runner.go:195] Run: systemctl --version
I1024 19:13:53.707753   24742 main.go:141] libmachine: (functional-853597) Calling .GetSSHHostname
I1024 19:13:53.710848   24742 main.go:141] libmachine: (functional-853597) DBG | domain functional-853597 has defined MAC address 52:54:00:5b:c7:c6 in network mk-functional-853597
I1024 19:13:53.711323   24742 main.go:141] libmachine: (functional-853597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c7:c6", ip: ""} in network mk-functional-853597: {Iface:virbr1 ExpiryTime:2023-10-24 20:10:20 +0000 UTC Type:0 Mac:52:54:00:5b:c7:c6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:functional-853597 Clientid:01:52:54:00:5b:c7:c6}
I1024 19:13:53.711348   24742 main.go:141] libmachine: (functional-853597) DBG | domain functional-853597 has defined IP address 192.168.39.249 and MAC address 52:54:00:5b:c7:c6 in network mk-functional-853597
I1024 19:13:53.711523   24742 main.go:141] libmachine: (functional-853597) Calling .GetSSHPort
I1024 19:13:53.711747   24742 main.go:141] libmachine: (functional-853597) Calling .GetSSHKeyPath
I1024 19:13:53.711894   24742 main.go:141] libmachine: (functional-853597) Calling .GetSSHUsername
I1024 19:13:53.712036   24742 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/functional-853597/id_rsa Username:docker}
I1024 19:13:53.890744   24742 ssh_runner.go:195] Run: sudo crictl images --output json
I1024 19:13:54.006979   24742 main.go:141] libmachine: Making call to close driver server
I1024 19:13:54.006994   24742 main.go:141] libmachine: (functional-853597) Calling .Close
I1024 19:13:54.007244   24742 main.go:141] libmachine: (functional-853597) DBG | Closing plugin on server side
I1024 19:13:54.007270   24742 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:13:54.007286   24742 main.go:141] libmachine: Making call to close connection to plugin binary
I1024 19:13:54.007303   24742 main.go:141] libmachine: Making call to close driver server
I1024 19:13:54.007313   24742 main.go:141] libmachine: (functional-853597) Calling .Close
I1024 19:13:54.007547   24742 main.go:141] libmachine: (functional-853597) DBG | Closing plugin on server side
I1024 19:13:54.007583   24742 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:13:54.007592   24742 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)
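
Besides the short listing above, the same command accepts table, json and yaml output (exercised by the next three tests); the json form is the easiest to post-process in scripts:

    out/minikube-linux-amd64 -p functional-853597 image ls --format json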

x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853597 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | latest             | bc649bab30d15 | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| docker.io/library/mysql                 | 5.7                | 3b85be0b10d38 | 601MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 5374347291230 | 127MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-853597  | ec2ea38bbe861 | 3.35kB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/google-containers/addon-resizer  | functional-853597  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 10baa1ca17068 | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.3            | bfc896cf80fba | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 6d1b4fd1b182d | 61.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853597 image ls --format table --alsologtostderr:
I1024 19:13:54.546370   24877 out.go:296] Setting OutFile to fd 1 ...
I1024 19:13:54.546611   24877 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:13:54.546620   24877 out.go:309] Setting ErrFile to fd 2...
I1024 19:13:54.546626   24877 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:13:54.546816   24877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
I1024 19:13:54.547349   24877 config.go:182] Loaded profile config "functional-853597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:13:54.547463   24877 config.go:182] Loaded profile config "functional-853597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:13:54.547850   24877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1024 19:13:54.547900   24877 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:13:54.562607   24877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42463
I1024 19:13:54.563101   24877 main.go:141] libmachine: () Calling .GetVersion
I1024 19:13:54.563769   24877 main.go:141] libmachine: Using API Version  1
I1024 19:13:54.563803   24877 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:13:54.564238   24877 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:13:54.564459   24877 main.go:141] libmachine: (functional-853597) Calling .GetState
I1024 19:13:54.566259   24877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1024 19:13:54.566300   24877 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:13:54.580664   24877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
I1024 19:13:54.581094   24877 main.go:141] libmachine: () Calling .GetVersion
I1024 19:13:54.581586   24877 main.go:141] libmachine: Using API Version  1
I1024 19:13:54.581607   24877 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:13:54.581943   24877 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:13:54.582221   24877 main.go:141] libmachine: (functional-853597) Calling .DriverName
I1024 19:13:54.582458   24877 ssh_runner.go:195] Run: systemctl --version
I1024 19:13:54.582486   24877 main.go:141] libmachine: (functional-853597) Calling .GetSSHHostname
I1024 19:13:54.585505   24877 main.go:141] libmachine: (functional-853597) DBG | domain functional-853597 has defined MAC address 52:54:00:5b:c7:c6 in network mk-functional-853597
I1024 19:13:54.585935   24877 main.go:141] libmachine: (functional-853597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c7:c6", ip: ""} in network mk-functional-853597: {Iface:virbr1 ExpiryTime:2023-10-24 20:10:20 +0000 UTC Type:0 Mac:52:54:00:5b:c7:c6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:functional-853597 Clientid:01:52:54:00:5b:c7:c6}
I1024 19:13:54.585982   24877 main.go:141] libmachine: (functional-853597) DBG | domain functional-853597 has defined IP address 192.168.39.249 and MAC address 52:54:00:5b:c7:c6 in network mk-functional-853597
I1024 19:13:54.586085   24877 main.go:141] libmachine: (functional-853597) Calling .GetSSHPort
I1024 19:13:54.586260   24877 main.go:141] libmachine: (functional-853597) Calling .GetSSHKeyPath
I1024 19:13:54.586419   24877 main.go:141] libmachine: (functional-853597) Calling .GetSSHUsername
I1024 19:13:54.586577   24877 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/functional-853597/id_rsa Username:docker}
I1024 19:13:54.730248   24877 ssh_runner.go:195] Run: sudo crictl images --output json
I1024 19:13:54.840878   24877 main.go:141] libmachine: Making call to close driver server
I1024 19:13:54.840902   24877 main.go:141] libmachine: (functional-853597) Calling .Close
I1024 19:13:54.841259   24877 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:13:54.841287   24877 main.go:141] libmachine: Making call to close connection to plugin binary
I1024 19:13:54.841306   24877 main.go:141] libmachine: Making call to close driver server
I1024 19:13:54.841322   24877 main.go:141] libmachine: (functional-853597) Calling .Close
I1024 19:13:54.841567   24877 main.go:141] libmachine: (functional-853597) DBG | Closing plugin on server side
I1024 19:13:54.841628   24877 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:13:54.841651   24877 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.37s)

x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853597 image ls --format json --alsologtostderr:
[{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":["registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"74691991"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073
122f0cc90374"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"61498678"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glib
c"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8
"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4","repoDigests":["docker.io/library/mysql@sha256:188121394576d05aedb5daf229403bf58d4ee16e04e81828e4d43b
72bd227bc2","docker.io/library/mysql@sha256:4f9bfb0f7dd97739ceedb546b381534bb11e9b4abf013d6ad9ae6473fed66099"],"repoTags":["docker.io/library/mysql:5.7"],"size":"600824773"},{"id":"bc649bab30d150c10a84031a7f54c99a8c31069c7bc324a7899d7125d59cc973","repoDigests":["docker.io/library/nginx@sha256:3a12fc354e3c4dd62196a809e52a5d2f8f385b52fcc62145b0efec5954bb8fa1","docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595"],"repoTags":["docker.io/library/nginx:latest"],"size":"190917887"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-853597"],"size":"34114467"},{"id":"ec2ea38bbe8616ebf01fab844c996756bf1fd598804b19d97172821131130260","repoDigests":["localhost/minikube-local-cache-test@sha256:dd4295a268506634b6a1c6dc84c41881c04f4b6a66a467fd11c1bda0f53dbcd0"],"repoTa
gs":["localhost/minikube-local-cache-test:functional-853597"],"size":"3345"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":["registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"127165392"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"123188534"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b
6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853597 image ls --format json --alsologtostderr:
I1024 19:13:54.186905   24830 out.go:296] Setting OutFile to fd 1 ...
I1024 19:13:54.187084   24830 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:13:54.187093   24830 out.go:309] Setting ErrFile to fd 2...
I1024 19:13:54.187098   24830 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:13:54.187333   24830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
I1024 19:13:54.188165   24830 config.go:182] Loaded profile config "functional-853597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:13:54.188331   24830 config.go:182] Loaded profile config "functional-853597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:13:54.188878   24830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1024 19:13:54.188934   24830 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:13:54.202667   24830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40015
I1024 19:13:54.203134   24830 main.go:141] libmachine: () Calling .GetVersion
I1024 19:13:54.203808   24830 main.go:141] libmachine: Using API Version  1
I1024 19:13:54.203836   24830 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:13:54.204187   24830 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:13:54.204353   24830 main.go:141] libmachine: (functional-853597) Calling .GetState
I1024 19:13:54.206182   24830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1024 19:13:54.206231   24830 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:13:54.219910   24830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33273
I1024 19:13:54.220373   24830 main.go:141] libmachine: () Calling .GetVersion
I1024 19:13:54.220860   24830 main.go:141] libmachine: Using API Version  1
I1024 19:13:54.220881   24830 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:13:54.221165   24830 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:13:54.221343   24830 main.go:141] libmachine: (functional-853597) Calling .DriverName
I1024 19:13:54.221528   24830 ssh_runner.go:195] Run: systemctl --version
I1024 19:13:54.221551   24830 main.go:141] libmachine: (functional-853597) Calling .GetSSHHostname
I1024 19:13:54.224344   24830 main.go:141] libmachine: (functional-853597) DBG | domain functional-853597 has defined MAC address 52:54:00:5b:c7:c6 in network mk-functional-853597
I1024 19:13:54.224716   24830 main.go:141] libmachine: (functional-853597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c7:c6", ip: ""} in network mk-functional-853597: {Iface:virbr1 ExpiryTime:2023-10-24 20:10:20 +0000 UTC Type:0 Mac:52:54:00:5b:c7:c6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:functional-853597 Clientid:01:52:54:00:5b:c7:c6}
I1024 19:13:54.224746   24830 main.go:141] libmachine: (functional-853597) DBG | domain functional-853597 has defined IP address 192.168.39.249 and MAC address 52:54:00:5b:c7:c6 in network mk-functional-853597
I1024 19:13:54.224875   24830 main.go:141] libmachine: (functional-853597) Calling .GetSSHPort
I1024 19:13:54.225053   24830 main.go:141] libmachine: (functional-853597) Calling .GetSSHKeyPath
I1024 19:13:54.225194   24830 main.go:141] libmachine: (functional-853597) Calling .GetSSHUsername
I1024 19:13:54.225344   24830 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/functional-853597/id_rsa Username:docker}
I1024 19:13:54.378339   24830 ssh_runner.go:195] Run: sudo crictl images --output json
I1024 19:13:54.473705   24830 main.go:141] libmachine: Making call to close driver server
I1024 19:13:54.473723   24830 main.go:141] libmachine: (functional-853597) Calling .Close
I1024 19:13:54.473982   24830 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:13:54.474002   24830 main.go:141] libmachine: Making call to close connection to plugin binary
I1024 19:13:54.474029   24830 main.go:141] libmachine: Making call to close driver server
I1024 19:13:54.474037   24830 main.go:141] libmachine: (functional-853597) DBG | Closing plugin on server side
I1024 19:13:54.474040   24830 main.go:141] libmachine: (functional-853597) Calling .Close
I1024 19:13:54.474263   24830 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:13:54.474280   24830 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853597 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4
repoDigests:
- docker.io/library/mysql@sha256:188121394576d05aedb5daf229403bf58d4ee16e04e81828e4d43b72bd227bc2
- docker.io/library/mysql@sha256:4f9bfb0f7dd97739ceedb546b381534bb11e9b4abf013d6ad9ae6473fed66099
repoTags:
- docker.io/library/mysql:5.7
size: "600824773"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "74691991"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "123188534"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "127165392"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "61498678"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: bc649bab30d150c10a84031a7f54c99a8c31069c7bc324a7899d7125d59cc973
repoDigests:
- docker.io/library/nginx@sha256:3a12fc354e3c4dd62196a809e52a5d2f8f385b52fcc62145b0efec5954bb8fa1
- docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595
repoTags:
- docker.io/library/nginx:latest
size: "190917887"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-853597
size: "34114467"
- id: ec2ea38bbe8616ebf01fab844c996756bf1fd598804b19d97172821131130260
repoDigests:
- localhost/minikube-local-cache-test@sha256:dd4295a268506634b6a1c6dc84c41881c04f4b6a66a467fd11c1bda0f53dbcd0
repoTags:
- localhost/minikube-local-cache-test:functional-853597
size: "3345"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853597 image ls --format yaml --alsologtostderr:
I1024 19:13:53.763614   24777 out.go:296] Setting OutFile to fd 1 ...
I1024 19:13:53.763861   24777 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:13:53.763870   24777 out.go:309] Setting ErrFile to fd 2...
I1024 19:13:53.763877   24777 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:13:53.764076   24777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
I1024 19:13:53.764643   24777 config.go:182] Loaded profile config "functional-853597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:13:53.764760   24777 config.go:182] Loaded profile config "functional-853597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:13:53.765166   24777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1024 19:13:53.765221   24777 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:13:53.781229   24777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37303
I1024 19:13:53.781742   24777 main.go:141] libmachine: () Calling .GetVersion
I1024 19:13:53.782334   24777 main.go:141] libmachine: Using API Version  1
I1024 19:13:53.782356   24777 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:13:53.782698   24777 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:13:53.782904   24777 main.go:141] libmachine: (functional-853597) Calling .GetState
I1024 19:13:53.784836   24777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1024 19:13:53.784892   24777 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:13:53.799484   24777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
I1024 19:13:53.799873   24777 main.go:141] libmachine: () Calling .GetVersion
I1024 19:13:53.800387   24777 main.go:141] libmachine: Using API Version  1
I1024 19:13:53.800410   24777 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:13:53.800764   24777 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:13:53.800931   24777 main.go:141] libmachine: (functional-853597) Calling .DriverName
I1024 19:13:53.801138   24777 ssh_runner.go:195] Run: systemctl --version
I1024 19:13:53.801167   24777 main.go:141] libmachine: (functional-853597) Calling .GetSSHHostname
I1024 19:13:53.803872   24777 main.go:141] libmachine: (functional-853597) DBG | domain functional-853597 has defined MAC address 52:54:00:5b:c7:c6 in network mk-functional-853597
I1024 19:13:53.804214   24777 main.go:141] libmachine: (functional-853597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c7:c6", ip: ""} in network mk-functional-853597: {Iface:virbr1 ExpiryTime:2023-10-24 20:10:20 +0000 UTC Type:0 Mac:52:54:00:5b:c7:c6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:functional-853597 Clientid:01:52:54:00:5b:c7:c6}
I1024 19:13:53.804244   24777 main.go:141] libmachine: (functional-853597) DBG | domain functional-853597 has defined IP address 192.168.39.249 and MAC address 52:54:00:5b:c7:c6 in network mk-functional-853597
I1024 19:13:53.804315   24777 main.go:141] libmachine: (functional-853597) Calling .GetSSHPort
I1024 19:13:53.804517   24777 main.go:141] libmachine: (functional-853597) Calling .GetSSHKeyPath
I1024 19:13:53.804681   24777 main.go:141] libmachine: (functional-853597) Calling .GetSSHUsername
I1024 19:13:53.804818   24777 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/functional-853597/id_rsa Username:docker}
I1024 19:13:53.962636   24777 ssh_runner.go:195] Run: sudo crictl images --output json
I1024 19:13:54.116205   24777 main.go:141] libmachine: Making call to close driver server
I1024 19:13:54.116216   24777 main.go:141] libmachine: (functional-853597) Calling .Close
I1024 19:13:54.116474   24777 main.go:141] libmachine: (functional-853597) DBG | Closing plugin on server side
I1024 19:13:54.116485   24777 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:13:54.116509   24777 main.go:141] libmachine: Making call to close connection to plugin binary
I1024 19:13:54.116525   24777 main.go:141] libmachine: Making call to close driver server
I1024 19:13:54.116534   24777 main.go:141] libmachine: (functional-853597) Calling .Close
I1024 19:13:54.116738   24777 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:13:54.116751   24777 main.go:141] libmachine: (functional-853597) DBG | Closing plugin on server side
I1024 19:13:54.116765   24777 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853597 ssh pgrep buildkitd: exit status 1 (264.097723ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image build -t localhost/my-image:functional-853597 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 image build -t localhost/my-image:functional-853597 testdata/build --alsologtostderr: (3.808690042s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-853597 image build -t localhost/my-image:functional-853597 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 336d93d25d3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-853597
--> 5d5b9407074
Successfully tagged localhost/my-image:functional-853597
5d5b9407074e502713f973915558d294bc8acd3b1beffa403008cb9b8c10478a
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-853597 image build -t localhost/my-image:functional-853597 testdata/build --alsologtostderr:
I1024 19:13:54.333670   24854 out.go:296] Setting OutFile to fd 1 ...
I1024 19:13:54.333843   24854 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:13:54.333853   24854 out.go:309] Setting ErrFile to fd 2...
I1024 19:13:54.333861   24854 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:13:54.334070   24854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
I1024 19:13:54.334661   24854 config.go:182] Loaded profile config "functional-853597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:13:54.335187   24854 config.go:182] Loaded profile config "functional-853597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:13:54.335544   24854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1024 19:13:54.335586   24854 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:13:54.350173   24854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
I1024 19:13:54.350587   24854 main.go:141] libmachine: () Calling .GetVersion
I1024 19:13:54.351188   24854 main.go:141] libmachine: Using API Version  1
I1024 19:13:54.351213   24854 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:13:54.351549   24854 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:13:54.351724   24854 main.go:141] libmachine: (functional-853597) Calling .GetState
I1024 19:13:54.353435   24854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1024 19:13:54.353484   24854 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:13:54.367652   24854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
I1024 19:13:54.368060   24854 main.go:141] libmachine: () Calling .GetVersion
I1024 19:13:54.368535   24854 main.go:141] libmachine: Using API Version  1
I1024 19:13:54.368551   24854 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:13:54.368920   24854 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:13:54.369117   24854 main.go:141] libmachine: (functional-853597) Calling .DriverName
I1024 19:13:54.369331   24854 ssh_runner.go:195] Run: systemctl --version
I1024 19:13:54.369356   24854 main.go:141] libmachine: (functional-853597) Calling .GetSSHHostname
I1024 19:13:54.372011   24854 main.go:141] libmachine: (functional-853597) DBG | domain functional-853597 has defined MAC address 52:54:00:5b:c7:c6 in network mk-functional-853597
I1024 19:13:54.372397   24854 main.go:141] libmachine: (functional-853597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c7:c6", ip: ""} in network mk-functional-853597: {Iface:virbr1 ExpiryTime:2023-10-24 20:10:20 +0000 UTC Type:0 Mac:52:54:00:5b:c7:c6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:functional-853597 Clientid:01:52:54:00:5b:c7:c6}
I1024 19:13:54.372428   24854 main.go:141] libmachine: (functional-853597) DBG | domain functional-853597 has defined IP address 192.168.39.249 and MAC address 52:54:00:5b:c7:c6 in network mk-functional-853597
I1024 19:13:54.372584   24854 main.go:141] libmachine: (functional-853597) Calling .GetSSHPort
I1024 19:13:54.372781   24854 main.go:141] libmachine: (functional-853597) Calling .GetSSHKeyPath
I1024 19:13:54.372941   24854 main.go:141] libmachine: (functional-853597) Calling .GetSSHUsername
I1024 19:13:54.373079   24854 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/functional-853597/id_rsa Username:docker}
I1024 19:13:54.503094   24854 build_images.go:151] Building image from path: /tmp/build.473592142.tar
I1024 19:13:54.503163   24854 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1024 19:13:54.522184   24854 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.473592142.tar
I1024 19:13:54.530575   24854 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.473592142.tar: stat -c "%s %y" /var/lib/minikube/build/build.473592142.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.473592142.tar': No such file or directory
I1024 19:13:54.530629   24854 ssh_runner.go:362] scp /tmp/build.473592142.tar --> /var/lib/minikube/build/build.473592142.tar (3072 bytes)
I1024 19:13:54.581632   24854 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.473592142
I1024 19:13:54.614203   24854 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.473592142 -xf /var/lib/minikube/build/build.473592142.tar
I1024 19:13:54.649561   24854 crio.go:297] Building image: /var/lib/minikube/build/build.473592142
I1024 19:13:54.649633   24854 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-853597 /var/lib/minikube/build/build.473592142 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1024 19:13:58.061525   24854 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-853597 /var/lib/minikube/build/build.473592142 --cgroup-manager=cgroupfs: (3.411866946s)
I1024 19:13:58.061582   24854 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.473592142
I1024 19:13:58.070829   24854 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.473592142.tar
I1024 19:13:58.080691   24854 build_images.go:207] Built localhost/my-image:functional-853597 from /tmp/build.473592142.tar
I1024 19:13:58.080717   24854 build_images.go:123] succeeded building to: functional-853597
I1024 19:13:58.080721   24854 build_images.go:124] failed building to: 
I1024 19:13:58.080738   24854 main.go:141] libmachine: Making call to close driver server
I1024 19:13:58.080748   24854 main.go:141] libmachine: (functional-853597) Calling .Close
I1024 19:13:58.081053   24854 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:13:58.081071   24854 main.go:141] libmachine: (functional-853597) DBG | Closing plugin on server side
I1024 19:13:58.081080   24854 main.go:141] libmachine: Making call to close connection to plugin binary
I1024 19:13:58.081104   24854 main.go:141] libmachine: Making call to close driver server
I1024 19:13:58.081118   24854 main.go:141] libmachine: (functional-853597) Calling .Close
I1024 19:13:58.081354   24854 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:13:58.081370   24854 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image ls
2023/10/24 19:13:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.32s)
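For reference, the three STEPs in the stdout above correspond to a Dockerfile along the following lines. This is only a sketch reconstructed from the STEP output: the actual contents of testdata/build are not reproduced in this report, and /tmp/build-ctx is a hypothetical stand-in for a local build context.

# Hypothetical build context reconstructed from the STEP 1/3..3/3 lines above.
mkdir -p /tmp/build-ctx && echo test > /tmp/build-ctx/content.txt
cat > /tmp/build-ctx/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Same invocation the test uses, pointed at the stand-in context directory.
out/minikube-linux-amd64 -p functional-853597 image build -t localhost/my-image:functional-853597 /tmp/build-ctx --alsologtostderr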

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.017371858s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-853597
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.04s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image load --daemon gcr.io/google-containers/addon-resizer:functional-853597 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 image load --daemon gcr.io/google-containers/addon-resizer:functional-853597 --alsologtostderr: (4.937707779s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image load --daemon gcr.io/google-containers/addon-resizer:functional-853597 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 image load --daemon gcr.io/google-containers/addon-resizer:functional-853597 --alsologtostderr: (4.622773836s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.94s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E1024 19:13:21.664686   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-853597
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image load --daemon gcr.io/google-containers/addon-resizer:functional-853597 --alsologtostderr
E1024 19:13:24.224855   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 image load --daemon gcr.io/google-containers/addon-resizer:functional-853597 --alsologtostderr: (10.046998905s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image save gcr.io/google-containers/addon-resizer:functional-853597 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 image save gcr.io/google-containers/addon-resizer:functional-853597 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.116997864s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.12s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image rm gcr.io/google-containers/addon-resizer:functional-853597 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (5.315586798s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.70s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-853597 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-853597 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-wd9jz" [cbf8c5e4-81dc-4469-8450-cac5507193d1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-wd9jz" [cbf8c5e4-81dc-4469-8450-cac5507193d1] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.042096885s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "262.774003ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "60.835204ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-853597
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 image save --daemon gcr.io/google-containers/addon-resizer:functional-853597 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 image save --daemon gcr.io/google-containers/addon-resizer:functional-853597 --alsologtostderr: (1.530360555s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-853597
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.57s)
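Taken together, the ImageCommands subtests above exercise a load/save round-trip. A condensed sketch using the same commands shown in the log, with /tmp/addon-resizer-save.tar standing in for the workspace tarball path used by the job:

docker pull gcr.io/google-containers/addon-resizer:1.8.8
docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-853597
# Push the host-side image into the cluster's container runtime.
out/minikube-linux-amd64 -p functional-853597 image load --daemon gcr.io/google-containers/addon-resizer:functional-853597
# Save to a tarball and load it back, as ImageSaveToFile/ImageLoadFromFile do.
out/minikube-linux-amd64 -p functional-853597 image save gcr.io/google-containers/addon-resizer:functional-853597 /tmp/addon-resizer-save.tar
out/minikube-linux-amd64 -p functional-853597 image load /tmp/addon-resizer-save.tar
# Export the in-cluster image back to the host docker daemon and confirm it exists.
out/minikube-linux-amd64 -p functional-853597 image save --daemon gcr.io/google-containers/addon-resizer:functional-853597
docker image inspect gcr.io/google-containers/addon-resizer:functional-853597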

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "280.318156ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "68.218618ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853597 /tmp/TestFunctionalparallelMountCmdany-port4258206223/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698174822094704649" to /tmp/TestFunctionalparallelMountCmdany-port4258206223/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698174822094704649" to /tmp/TestFunctionalparallelMountCmdany-port4258206223/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698174822094704649" to /tmp/TestFunctionalparallelMountCmdany-port4258206223/001/test-1698174822094704649
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853597 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.531658ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 24 19:13 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 24 19:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 24 19:13 test-1698174822094704649
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh cat /mount-9p/test-1698174822094704649
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-853597 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fb1ee478-b78d-42c5-b9c8-b74bb3b58277] Pending
helpers_test.go:344: "busybox-mount" [fb1ee478-b78d-42c5-b9c8-b74bb3b58277] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fb1ee478-b78d-42c5-b9c8-b74bb3b58277] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fb1ee478-b78d-42c5-b9c8-b74bb3b58277] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.018099314s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-853597 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853597 /tmp/TestFunctionalparallelMountCmdany-port4258206223/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.04s)
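The 9p mount flow above can be reproduced by hand. A minimal sketch, assuming the functional-853597 profile is running; /tmp/mount-src is a stand-in for the per-test temp directory:

mkdir -p /tmp/mount-src && echo hello > /tmp/mount-src/created-by-test
# Run the mount in the background (the test drives it as a daemon process).
out/minikube-linux-amd64 mount -p functional-853597 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
# Verify the 9p mount and list its contents from inside the guest.
out/minikube-linux-amd64 -p functional-853597 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-853597 ssh -- ls -la /mount-9p
# Tear down, as the cleanup steps do.
out/minikube-linux-amd64 -p functional-853597 ssh "sudo umount -f /mount-9p"
out/minikube-linux-amd64 mount -p functional-853597 --kill=true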

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 service list: (1.370236489s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-853597 service list -o json: (1.410748187s)
functional_test.go:1493: Took "1.410882944s" to run "out/minikube-linux-amd64 -p functional-853597 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.41s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853597 /tmp/TestFunctionalparallelMountCmdspecific-port3160424530/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853597 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.010814ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853597 /tmp/TestFunctionalparallelMountCmdspecific-port3160424530/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853597 ssh "sudo umount -f /mount-9p": exit status 1 (284.068316ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-853597 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853597 /tmp/TestFunctionalparallelMountCmdspecific-port3160424530/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.249:31220
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.249:31220
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
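Condensed, the ServiceCmd subtests above are a deploy/expose/resolve flow. The commands below mirror what the log shows; the node IP 192.168.39.249 and NodePort 31220 are simply what this run happened to assign:

kubectl --context functional-853597 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-853597 expose deployment hello-node --type=NodePort --port=8080
# List services and resolve reachable URLs through minikube.
out/minikube-linux-amd64 -p functional-853597 service list
out/minikube-linux-amd64 -p functional-853597 service --namespace=default --https --url hello-node
out/minikube-linux-amd64 -p functional-853597 service hello-node --url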

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853597 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3935999515/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853597 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3935999515/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-853597 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3935999515/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-853597 ssh "findmnt -T" /mount1: exit status 1 (308.841194ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-853597 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-853597 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853597 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3935999515/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853597 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3935999515/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-853597 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3935999515/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-853597
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-853597
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-853597
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (103.68s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-845802 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1024 19:14:41.027153   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-845802 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m43.684747347s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (103.68s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.51s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845802 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-845802 addons enable ingress --alsologtostderr -v=5: (11.514409872s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.51s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845802 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

                                                
                                    
x
+
TestJSONOutput/start/Command (100.19s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-553185 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1024 19:19:32.481401   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-553185 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.19362932s)
--- PASS: TestJSONOutput/start/Command (100.19s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-553185 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-553185 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (23.13s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-553185 --output=json --user=testUser
E1024 19:20:54.402791   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:21:00.584987   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:21:00.590245   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:21:00.600539   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:21:00.620831   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:21:00.661110   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:21:00.741433   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:21:00.901850   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:21:01.222433   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:21:01.863432   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:21:03.143861   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-553185 --output=json --user=testUser: (23.125233163s)
--- PASS: TestJSONOutput/stop/Command (23.13s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-731853 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-731853 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.097863ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2f533a16-0324-401e-b9c7-1c1c04b0c2a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-731853] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5fd8a670-b202-43a2-a31d-63266be8c45c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17485"}}
	{"specversion":"1.0","id":"99865252-c30d-47c6-84cd-3be8d9fa24b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e4ed0dc4-3115-4116-b845-9b5e18d0e950","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig"}}
	{"specversion":"1.0","id":"f6ca7705-19ed-4b1a-afc9-43c7b7094ed7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube"}}
	{"specversion":"1.0","id":"5cfbf45b-3ee5-4e64-ad8c-a7ed1b5c4ef6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a7bc80ce-1bc8-413b-b12f-69fda0a9e53f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ef5fb07d-67f8-4adb-8ca5-61747d76fbd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-731853" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-731853
--- PASS: TestErrorJSONOutput (0.22s)
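The stdout block above shows the shape of minikube's --output=json stream: one CloudEvents-style JSON object per line, with the event kind in "type" and the human-readable details (message, name, exitcode) under "data". A minimal sketch of filtering that stream on the command line, assuming jq is installed; it simply re-runs the failing invocation from this test:

    out/minikube-linux-amd64 start -p json-output-error-731853 --memory=2200 \
      --output=json --wait=true --driver=fail 2>/dev/null \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error")
               | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'

For the run above this prints: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/amd64.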

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (102.72s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-696374 --driver=kvm2  --container-runtime=crio
E1024 19:21:05.704981   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:21:10.825957   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:21:21.066107   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:21:41.546636   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-696374 --driver=kvm2  --container-runtime=crio: (49.179890244s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-699120 --driver=kvm2  --container-runtime=crio
E1024 19:22:22.508092   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-699120 --driver=kvm2  --container-runtime=crio: (50.869710759s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-696374
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-699120
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-699120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-699120
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-699120: (1.009674516s)
helpers_test.go:175: Cleaning up "first-696374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-696374
--- PASS: TestMinikubeProfile (102.72s)
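Condensed, the flow this test exercises is: create two profiles, switch the active profile between them, list them in JSON form, and clean up. A rough shell equivalent using the same profile names (the minikube binary path is shortened here):

    minikube start -p first-696374 --driver=kvm2 --container-runtime=crio
    minikube start -p second-699120 --driver=kvm2 --container-runtime=crio
    minikube profile first-696374     # make first-696374 the active profile
    minikube profile list -ojson      # machine-readable listing of both profiles
    minikube profile second-699120
    minikube delete -p second-699120
    minikube delete -p first-696374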

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.09s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-637861 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1024 19:23:10.558580   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-637861 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.092264661s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.09s)
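The start command above brings up a VM with a 9p host mount configured at boot. A sketch of the same invocation with the mount flags spelled out; the flag descriptions are paraphrased from minikube's help output as I read it, so treat them as assumptions rather than authoritative documentation:

    # --mount                    share a host directory into the guest over 9p
    # --mount-uid / --mount-gid  uid/gid that owns the mounted files in the guest
    # --mount-port               port the 9p server listens on
    # --mount-msize              number of bytes used for the 9p packet payload
    minikube start -p mount-start-1-637861 --memory=2048 --no-kubernetes \
      --driver=kvm2 --container-runtime=crio \
      --mount --mount-uid 0 --mount-gid 0 --mount-port 46464 --mount-msize 6543

    # The VerifyMount* steps that follow reduce to:
    minikube -p mount-start-1-637861 ssh -- ls /minikube-host
    minikube -p mount-start-1-637861 ssh -- mount | grep 9p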

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-637861 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-637861 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.47s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-658426 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1024 19:23:19.104781   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:23:38.243024   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-658426 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.474326137s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.47s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-658426 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-658426 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-637861 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-658426 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-658426 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.16s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-658426
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-658426: (1.155281984s)
--- PASS: TestMountStart/serial/Stop (1.16s)

                                                
                                    
TestMountStart/serial/RestartStopped (25.92s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-658426
E1024 19:23:44.428479   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-658426: (24.915905598s)
--- PASS: TestMountStart/serial/RestartStopped (25.92s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-658426 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-658426 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-632589 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1024 19:26:00.584361   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-632589 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m49.52348335s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.95s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-632589 -- rollout status deployment/busybox: (2.552937055s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- exec busybox-5bc68d56bd-ddcjz -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- exec busybox-5bc68d56bd-wrmmm -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- exec busybox-5bc68d56bd-ddcjz -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- exec busybox-5bc68d56bd-wrmmm -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- exec busybox-5bc68d56bd-ddcjz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-632589 -- exec busybox-5bc68d56bd-wrmmm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.41s)
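The deployment and the nslookup calls above verify that pods scheduled on different nodes can resolve both an external name and the in-cluster service names. A sketch of the same check against an existing profile; it assumes the kubeconfig context created for the profile is named after it, and reuses the manifest path from the test:

    kubectl --context multinode-632589 apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl --context multinode-632589 rollout status deployment/busybox
    for pod in $(kubectl --context multinode-632589 get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl --context multinode-632589 exec "$pod" -- nslookup kubernetes.io
      kubectl --context multinode-632589 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done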

                                                
                                    
TestMultiNode/serial/AddNode (45.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-632589 -v 3 --alsologtostderr
E1024 19:26:28.268818   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-632589 -v 3 --alsologtostderr: (44.417052388s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.02s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 cp testdata/cp-test.txt multinode-632589:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 cp multinode-632589:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3245783295/001/cp-test_multinode-632589.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 cp multinode-632589:/home/docker/cp-test.txt multinode-632589-m02:/home/docker/cp-test_multinode-632589_multinode-632589-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m02 "sudo cat /home/docker/cp-test_multinode-632589_multinode-632589-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 cp multinode-632589:/home/docker/cp-test.txt multinode-632589-m03:/home/docker/cp-test_multinode-632589_multinode-632589-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m03 "sudo cat /home/docker/cp-test_multinode-632589_multinode-632589-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 cp testdata/cp-test.txt multinode-632589-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 cp multinode-632589-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3245783295/001/cp-test_multinode-632589-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 cp multinode-632589-m02:/home/docker/cp-test.txt multinode-632589:/home/docker/cp-test_multinode-632589-m02_multinode-632589.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589 "sudo cat /home/docker/cp-test_multinode-632589-m02_multinode-632589.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 cp multinode-632589-m02:/home/docker/cp-test.txt multinode-632589-m03:/home/docker/cp-test_multinode-632589-m02_multinode-632589-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m03 "sudo cat /home/docker/cp-test_multinode-632589-m02_multinode-632589-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 cp testdata/cp-test.txt multinode-632589-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 cp multinode-632589-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3245783295/001/cp-test_multinode-632589-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 cp multinode-632589-m03:/home/docker/cp-test.txt multinode-632589:/home/docker/cp-test_multinode-632589-m03_multinode-632589.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589 "sudo cat /home/docker/cp-test_multinode-632589-m03_multinode-632589.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 cp multinode-632589-m03:/home/docker/cp-test.txt multinode-632589-m02:/home/docker/cp-test_multinode-632589-m03_multinode-632589-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 ssh -n multinode-632589-m02 "sudo cat /home/docker/cp-test_multinode-632589-m03_multinode-632589-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.70s)
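The matrix above covers every direction of minikube cp on a multi-node profile: host to node, node to host, and node to node, each followed by an ssh -n <node> check that the file arrived. Condensed to one pass, with the names taken from this run:

    # host -> primary node
    minikube -p multinode-632589 cp testdata/cp-test.txt multinode-632589:/home/docker/cp-test.txt
    # node -> host
    minikube -p multinode-632589 cp multinode-632589:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> another node
    minikube -p multinode-632589 cp multinode-632589:/home/docker/cp-test.txt \
      multinode-632589-m02:/home/docker/cp-test.txt
    # confirm on the target node
    minikube -p multinode-632589 ssh -n multinode-632589-m02 "sudo cat /home/docker/cp-test.txt"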

                                                
                                    
TestMultiNode/serial/StopNode (3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-632589 node stop m03: (2.089318197s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-632589 status: exit status 7 (445.214364ms)

                                                
                                                
-- stdout --
	multinode-632589
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-632589-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-632589-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-632589 status --alsologtostderr: exit status 7 (462.124861ms)

                                                
                                                
-- stdout --
	multinode-632589
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-632589-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-632589-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 19:27:04.842280   32371 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:27:04.842429   32371 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:27:04.842438   32371 out.go:309] Setting ErrFile to fd 2...
	I1024 19:27:04.842445   32371 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:27:04.842654   32371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 19:27:04.842841   32371 out.go:303] Setting JSON to false
	I1024 19:27:04.842874   32371 mustload.go:65] Loading cluster: multinode-632589
	I1024 19:27:04.842972   32371 notify.go:220] Checking for updates...
	I1024 19:27:04.843346   32371 config.go:182] Loaded profile config "multinode-632589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:27:04.843362   32371 status.go:255] checking status of multinode-632589 ...
	I1024 19:27:04.843818   32371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:27:04.843904   32371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:27:04.862804   32371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44929
	I1024 19:27:04.863173   32371 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:27:04.863679   32371 main.go:141] libmachine: Using API Version  1
	I1024 19:27:04.863700   32371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:27:04.864021   32371 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:27:04.864231   32371 main.go:141] libmachine: (multinode-632589) Calling .GetState
	I1024 19:27:04.865575   32371 status.go:330] multinode-632589 host status = "Running" (err=<nil>)
	I1024 19:27:04.865588   32371 host.go:66] Checking if "multinode-632589" exists ...
	I1024 19:27:04.865853   32371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:27:04.865889   32371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:27:04.880762   32371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I1024 19:27:04.881107   32371 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:27:04.881509   32371 main.go:141] libmachine: Using API Version  1
	I1024 19:27:04.881530   32371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:27:04.881829   32371 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:27:04.882004   32371 main.go:141] libmachine: (multinode-632589) Calling .GetIP
	I1024 19:27:04.884720   32371 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:27:04.885135   32371 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:27:04.885171   32371 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:27:04.885314   32371 host.go:66] Checking if "multinode-632589" exists ...
	I1024 19:27:04.885580   32371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:27:04.885616   32371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:27:04.899117   32371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I1024 19:27:04.899470   32371 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:27:04.899886   32371 main.go:141] libmachine: Using API Version  1
	I1024 19:27:04.899912   32371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:27:04.900175   32371 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:27:04.900350   32371 main.go:141] libmachine: (multinode-632589) Calling .DriverName
	I1024 19:27:04.900499   32371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:27:04.900523   32371 main.go:141] libmachine: (multinode-632589) Calling .GetSSHHostname
	I1024 19:27:04.903274   32371 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:27:04.903696   32371 main.go:141] libmachine: (multinode-632589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:c3:34", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:24:27 +0000 UTC Type:0 Mac:52:54:00:9a:c3:34 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-632589 Clientid:01:52:54:00:9a:c3:34}
	I1024 19:27:04.903731   32371 main.go:141] libmachine: (multinode-632589) DBG | domain multinode-632589 has defined IP address 192.168.39.247 and MAC address 52:54:00:9a:c3:34 in network mk-multinode-632589
	I1024 19:27:04.903854   32371 main.go:141] libmachine: (multinode-632589) Calling .GetSSHPort
	I1024 19:27:04.904017   32371 main.go:141] libmachine: (multinode-632589) Calling .GetSSHKeyPath
	I1024 19:27:04.904141   32371 main.go:141] libmachine: (multinode-632589) Calling .GetSSHUsername
	I1024 19:27:04.904271   32371 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589/id_rsa Username:docker}
	I1024 19:27:05.002708   32371 ssh_runner.go:195] Run: systemctl --version
	I1024 19:27:05.009070   32371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:27:05.025730   32371 kubeconfig.go:92] found "multinode-632589" server: "https://192.168.39.247:8443"
	I1024 19:27:05.025756   32371 api_server.go:166] Checking apiserver status ...
	I1024 19:27:05.025797   32371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:27:05.039315   32371 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1083/cgroup
	I1024 19:27:05.049796   32371 api_server.go:182] apiserver freezer: "6:freezer:/kubepods/burstable/pod3765446b9543fe4146506d2b0cf0aafd/crio-48cb7643f21dd7f26baad900a9f112360724b58b3af2af74d81611c399a385e9"
	I1024 19:27:05.049869   32371 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod3765446b9543fe4146506d2b0cf0aafd/crio-48cb7643f21dd7f26baad900a9f112360724b58b3af2af74d81611c399a385e9/freezer.state
	I1024 19:27:05.060617   32371 api_server.go:204] freezer state: "THAWED"
	I1024 19:27:05.060646   32371 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I1024 19:27:05.065370   32371 api_server.go:279] https://192.168.39.247:8443/healthz returned 200:
	ok
	I1024 19:27:05.065394   32371 status.go:421] multinode-632589 apiserver status = Running (err=<nil>)
	I1024 19:27:05.065405   32371 status.go:257] multinode-632589 status: &{Name:multinode-632589 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1024 19:27:05.065425   32371 status.go:255] checking status of multinode-632589-m02 ...
	I1024 19:27:05.065725   32371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:27:05.065769   32371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:27:05.080117   32371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33489
	I1024 19:27:05.080535   32371 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:27:05.080980   32371 main.go:141] libmachine: Using API Version  1
	I1024 19:27:05.081000   32371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:27:05.081341   32371 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:27:05.081534   32371 main.go:141] libmachine: (multinode-632589-m02) Calling .GetState
	I1024 19:27:05.083016   32371 status.go:330] multinode-632589-m02 host status = "Running" (err=<nil>)
	I1024 19:27:05.083036   32371 host.go:66] Checking if "multinode-632589-m02" exists ...
	I1024 19:27:05.083295   32371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:27:05.083329   32371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:27:05.097585   32371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38001
	I1024 19:27:05.097939   32371 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:27:05.098377   32371 main.go:141] libmachine: Using API Version  1
	I1024 19:27:05.098397   32371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:27:05.098710   32371 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:27:05.098867   32371 main.go:141] libmachine: (multinode-632589-m02) Calling .GetIP
	I1024 19:27:05.101666   32371 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:27:05.102149   32371 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:27:05.102183   32371 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:27:05.102297   32371 host.go:66] Checking if "multinode-632589-m02" exists ...
	I1024 19:27:05.102592   32371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:27:05.102633   32371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:27:05.119012   32371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43403
	I1024 19:27:05.119446   32371 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:27:05.119994   32371 main.go:141] libmachine: Using API Version  1
	I1024 19:27:05.120015   32371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:27:05.120348   32371 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:27:05.120550   32371 main.go:141] libmachine: (multinode-632589-m02) Calling .DriverName
	I1024 19:27:05.120756   32371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:27:05.120773   32371 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHHostname
	I1024 19:27:05.123506   32371 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:27:05.123971   32371 main.go:141] libmachine: (multinode-632589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ed:9b", ip: ""} in network mk-multinode-632589: {Iface:virbr1 ExpiryTime:2023-10-24 20:25:36 +0000 UTC Type:0 Mac:52:54:00:ae:ed:9b Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-632589-m02 Clientid:01:52:54:00:ae:ed:9b}
	I1024 19:27:05.124004   32371 main.go:141] libmachine: (multinode-632589-m02) DBG | domain multinode-632589-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:ae:ed:9b in network mk-multinode-632589
	I1024 19:27:05.124127   32371 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHPort
	I1024 19:27:05.124310   32371 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHKeyPath
	I1024 19:27:05.124438   32371 main.go:141] libmachine: (multinode-632589-m02) Calling .GetSSHUsername
	I1024 19:27:05.124572   32371 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9023/.minikube/machines/multinode-632589-m02/id_rsa Username:docker}
	I1024 19:27:05.216978   32371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:27:05.228999   32371 status.go:257] multinode-632589-m02 status: &{Name:multinode-632589-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1024 19:27:05.229034   32371 status.go:255] checking status of multinode-632589-m03 ...
	I1024 19:27:05.229409   32371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1024 19:27:05.229470   32371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:27:05.245046   32371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38297
	I1024 19:27:05.245449   32371 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:27:05.245973   32371 main.go:141] libmachine: Using API Version  1
	I1024 19:27:05.246000   32371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:27:05.246354   32371 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:27:05.246513   32371 main.go:141] libmachine: (multinode-632589-m03) Calling .GetState
	I1024 19:27:05.248116   32371 status.go:330] multinode-632589-m03 host status = "Stopped" (err=<nil>)
	I1024 19:27:05.248136   32371 status.go:343] host is not running, skipping remaining checks
	I1024 19:27:05.248143   32371 status.go:257] multinode-632589-m03 status: &{Name:multinode-632589-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.00s)
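Note that both status calls above "fail" with exit status 7: minikube status signals a partially stopped cluster through its exit code while still printing the per-node breakdown. A quick way to see that, following the same steps:

    minikube -p multinode-632589 node stop m03
    minikube -p multinode-632589 status
    echo "status exit code: $?"    # 7 while any node is stopped, 0 when everything is running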

                                                
                                    
TestMultiNode/serial/StartAfterStop (28.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-632589 node start m03 --alsologtostderr: (27.986319939s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.63s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-632589 node delete m03: (1.189351129s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.74s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (444.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-632589 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1024 19:43:10.558882   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:43:19.103880   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:46:00.584429   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 19:46:22.153456   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 19:48:10.559089   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:48:19.104453   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-632589 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m23.660903692s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-632589 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (444.22s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (49.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-632589
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-632589-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-632589-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.207949ms)

                                                
                                                
-- stdout --
	* [multinode-632589-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-632589-m02' is duplicated with machine name 'multinode-632589-m02' in profile 'multinode-632589'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-632589-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-632589-m03 --driver=kvm2  --container-runtime=crio: (48.241933624s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-632589
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-632589: exit status 80 (231.41795ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-632589
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-632589-m03 already exists in multinode-632589-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-632589-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.56s)
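The two non-zero exits above encode the naming rules this test validates: a new profile cannot reuse a machine name that already belongs to an existing multi-node profile (exit 14, MK_USAGE), and node add refuses to create a node whose generated name collides with a standalone profile (exit 80, GUEST_NODE_ADD). Reproduced step by step:

    minikube start -p multinode-632589-m02 --driver=kvm2 --container-runtime=crio   # exit 14: name already used by a machine in multinode-632589
    minikube start -p multinode-632589-m03 --driver=kvm2 --container-runtime=crio   # standalone profile that takes the next node name
    minikube node add -p multinode-632589                                           # exit 80: multinode-632589-m03 already exists
    minikube delete -p multinode-632589-m03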

                                                
                                    
TestScheduledStopUnix (116.59s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-320165 --memory=2048 --driver=kvm2  --container-runtime=crio
E1024 19:53:10.559411   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 19:53:19.103844   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-320165 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.828279364s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-320165 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-320165 -n scheduled-stop-320165
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-320165 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-320165 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-320165 -n scheduled-stop-320165
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-320165
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-320165 --schedule 15s
E1024 19:54:03.630017   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-320165
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-320165: exit status 7 (73.659858ms)

                                                
                                                
-- stdout --
	scheduled-stop-320165
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-320165 -n scheduled-stop-320165
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-320165 -n scheduled-stop-320165: exit status 7 (74.421871ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-320165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-320165
--- PASS: TestScheduledStopUnix (116.59s)
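The sequence above arms a delayed stop, cancels it, then arms a short one and lets it fire. A compressed sketch of the same workflow, using the status format strings from the test:

    minikube stop -p scheduled-stop-320165 --schedule 5m          # arm a stop 5 minutes out
    minikube status -p scheduled-stop-320165 --format='{{.TimeToStop}}'
    minikube stop -p scheduled-stop-320165 --cancel-scheduled     # disarm it again
    minikube stop -p scheduled-stop-320165 --schedule 15s         # arm a short stop and let it fire
    sleep 30
    minikube status -p scheduled-stop-320165 --format='{{.Host}}' # prints "Stopped" and exits 7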

                                                
                                    
TestKubernetesUpgrade (208.54s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-164196 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-164196 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.418531835s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-164196
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-164196: (2.115235801s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-164196 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-164196 status --format={{.Host}}: exit status 7 (93.140738ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-164196 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-164196 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.543176321s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-164196 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-164196 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-164196 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (120.977869ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-164196] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-164196
	    minikube start -p kubernetes-upgrade-164196 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1641962 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-164196 --kubernetes-version=v1.28.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-164196 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-164196 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.051688905s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-164196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-164196
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-164196: (1.117030755s)
--- PASS: TestKubernetesUpgrade (208.54s)
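The upgrade path exercised here is: bootstrap on v1.16.0, stop, restart with --kubernetes-version=v1.28.3, confirm that requesting the old version again is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED), and finally restart once more on the new version. A sketch of that sequence with a throwaway profile name:

    P=kubernetes-upgrade-demo    # example profile name, not the one from this run
    minikube start -p "$P" --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p "$P"
    minikube start -p "$P" --memory=2200 --kubernetes-version=v1.28.3 --driver=kvm2 --container-runtime=crio
    # downgrades are rejected; delete and recreate the cluster instead, as the error text advises
    minikube start -p "$P" --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio \
      || echo "downgrade refused with exit $?"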

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-817737 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-817737 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (97.25203ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-817737] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
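
The MK_USAGE exit above is the expected behaviour: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the workflow the error message points at (profile name taken from this run):

    # clear any globally configured Kubernetes version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-817737 --no-kubernetes --driver=kvm2 --container-runtime=crio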

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (100.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-817737 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-817737 --driver=kvm2  --container-runtime=crio: (1m40.277341357s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-817737 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (100.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (35.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-817737 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-817737 --no-kubernetes --driver=kvm2  --container-runtime=crio: (33.919645372s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-817737 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-817737 status -o json: exit status 2 (270.867568ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-817737","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-817737
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-817737: (1.076833749s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (35.27s)
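
The exit status 2 from `minikube status` above reflects the Kubelet and APIServer being reported as Stopped while the host stays Running; the JSON payload is still written to stdout. A small sketch of reading that state from a script (assumes `jq` is installed; profile name from this run):

    # status exits non-zero when any component is stopped, so don't abort on it
    minikube -p NoKubernetes-817737 status -o json || true
    minikube -p NoKubernetes-817737 status -o json | jq -r '.Kubelet'   # prints "Stopped" here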

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-817737 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-817737 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.712435711s)
--- PASS: TestNoKubernetes/serial/Start (28.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-817737 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-817737 "sudo systemctl is-active --quiet service kubelet": exit status 1 (236.175793ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
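
The exit status 1 (ssh process status 3) above is the pass condition here: `systemctl is-active` returns non-zero when the unit is not active, so a stopped kubelet makes the probe fail as intended. A minimal stand-alone version of the same check (command copied from the test; status 3 typically means the unit is inactive):

    minikube ssh -p NoKubernetes-817737 "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet running" || echo "kubelet not running"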

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.13s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-817737
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-817737: (1.868432494s)
--- PASS: TestNoKubernetes/serial/Stop (1.87s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (26.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-817737 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-817737 --driver=kvm2  --container-runtime=crio: (26.053620088s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (26.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-817737 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-817737 "sudo systemctl is-active --quiet service kubelet": exit status 1 (227.732536ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-784554 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-784554 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (131.556795ms)

                                                
                                                
-- stdout --
	* [false-784554] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 19:58:07.287844   43038 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:58:07.287968   43038 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:58:07.287976   43038 out.go:309] Setting ErrFile to fd 2...
	I1024 19:58:07.287981   43038 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:58:07.288174   43038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9023/.minikube/bin
	I1024 19:58:07.288719   43038 out.go:303] Setting JSON to false
	I1024 19:58:07.289648   43038 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5785,"bootTime":1698171702,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:58:07.289708   43038 start.go:138] virtualization: kvm guest
	I1024 19:58:07.292209   43038 out.go:177] * [false-784554] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:58:07.293807   43038 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:58:07.293804   43038 notify.go:220] Checking for updates...
	I1024 19:58:07.295553   43038 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:58:07.297398   43038 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9023/kubeconfig
	I1024 19:58:07.298731   43038 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9023/.minikube
	I1024 19:58:07.300033   43038 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:58:07.302038   43038 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:58:07.303736   43038 config.go:182] Loaded profile config "force-systemd-env-912715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:58:07.303835   43038 config.go:182] Loaded profile config "kubernetes-upgrade-164196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:58:07.303888   43038 config.go:182] Loaded profile config "stopped-upgrade-145190": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1024 19:58:07.303949   43038 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:58:07.338497   43038 out.go:177] * Using the kvm2 driver based on user configuration
	I1024 19:58:07.340419   43038 start.go:298] selected driver: kvm2
	I1024 19:58:07.340432   43038 start.go:902] validating driver "kvm2" against <nil>
	I1024 19:58:07.340446   43038 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:58:07.342597   43038 out.go:177] 
	W1024 19:58:07.344183   43038 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1024 19:58:07.345969   43038 out.go:177] 

                                                
                                                
** /stderr **
E1024 19:58:10.558937   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
net_test.go:88: 
----------------------- debugLogs start: false-784554 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-784554

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-784554

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-784554

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-784554

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-784554

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-784554

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-784554

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-784554

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-784554

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-784554

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-784554

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-784554" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-784554" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:57:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.61.41:8443
  name: kubernetes-upgrade-164196
contexts:
- context:
    cluster: kubernetes-upgrade-164196
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:57:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: kubernetes-upgrade-164196
  name: kubernetes-upgrade-164196
current-context: kubernetes-upgrade-164196
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-164196
  user:
    client-certificate: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/kubernetes-upgrade-164196/client.crt
    client-key: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/kubernetes-upgrade-164196/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-784554

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-784554"

                                                
                                                
----------------------- debugLogs end: false-784554 [took: 3.780094027s] --------------------------------
helpers_test.go:175: Cleaning up "false-784554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-784554
--- PASS: TestNetworkPlugins/group/false (4.08s)
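
The immediate MK_USAGE exit above is what this negative test expects: with the crio runtime a CNI is mandatory, so `--cni=false` is rejected before any VM is created, and the debugLogs output that follows is just the harness probing a profile that was never started. A minimal sketch of a start line that does satisfy the constraint (bridge is one of minikube's built-in --cni choices):

    # crio requires some CNI; pick one explicitly instead of disabling it
    minikube start -p false-784554 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio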

                                                
                                    
x
+
TestPause/serial/Start (112.72s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-636215 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-636215 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m52.722735856s)
--- PASS: TestPause/serial/Start (112.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (355.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-467375 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1024 20:01:00.584168   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-467375 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (5m55.753543292s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (355.75s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (158.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-014826 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-014826 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (2m38.406280876s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (158.41s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-145190
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (152.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-867165 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-867165 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (2m32.722528585s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (152.72s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-643126 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-643126 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m41.486223368s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.49s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-014826 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7a8e5c07-7077-4947-8c31-f3c6da4d5e92] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7a8e5c07-7077-4947-8c31-f3c6da4d5e92] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.035864802s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-014826 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.50s)
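
The DeployApp step above amounts to creating the busybox pod from the repository's testdata and waiting for the integration-test=busybox label to become Ready. A minimal sketch of the same sequence done by hand with kubectl (context name from this run; the manifest path is the one the test uses, and the timeout mirrors the test's 8m0s wait):

    kubectl --context no-preload-014826 create -f testdata/busybox.yaml
    # block until the labelled pod reports Ready
    kubectl --context no-preload-014826 wait --for=condition=Ready pod \
      -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-014826 exec busybox -- /bin/sh -c "ulimit -n"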

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-867165 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [38a424c5-7864-4116-b76f-3cf8ea7f8ce5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [38a424c5-7864-4116-b76f-3cf8ea7f8ce5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.036309619s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-867165 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-014826 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-014826 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.192042923s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-014826 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)
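
The addon step above uses minikube's per-addon image and registry overrides, so the metrics-server deployment is rendered with the registry/image pair given on the command line rather than the addon's default image. A minimal sketch of enabling the addon with those overrides and inspecting the result (values copied from the run above):

    minikube addons enable metrics-server -p no-preload-014826 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # confirm which image the deployment was rendered with
    kubectl --context no-preload-014826 -n kube-system describe deploy/metrics-server | grep -i image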

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-867165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-867165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.160000351s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-867165 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-643126 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [65a34d3b-218a-456c-8c23-ec8d153cbbc0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [65a34d3b-218a-456c-8c23-ec8d153cbbc0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.033164791s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-643126 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.46s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-643126 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-643126 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.066116961s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-643126 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (694.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-014826 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-014826 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (11m33.955083812s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-014826 -n no-preload-014826
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (694.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (578.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-867165 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-867165 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (9m38.272658182s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-867165 -n embed-certs-867165
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (578.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-467375 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7987137a-6008-4fe0-979f-4ec1b8c7af65] Pending
helpers_test.go:344: "busybox" [7987137a-6008-4fe0-979f-4ec1b8c7af65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7987137a-6008-4fe0-979f-4ec1b8c7af65] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.036389751s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-467375 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-467375 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-467375 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (515.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-643126 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1024 20:08:10.558677   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 20:08:19.103924   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-643126 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (8m35.164259108s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643126 -n default-k8s-diff-port-643126
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (515.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (589.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-467375 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1024 20:10:43.630521   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 20:11:00.584319   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
E1024 20:13:10.559105   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
E1024 20:13:19.104773   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
E1024 20:16:00.584169   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-467375 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (9m48.743543139s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467375 -n old-k8s-version-467375
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (589.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (61.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-398707 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1024 20:31:00.585098   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/ingress-addon-legacy-845802/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-398707 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m1.706749811s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.71s)
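
This profile is started in CNI mode with a custom pod network CIDR passed through to kubeadm, and --wait is narrowed to the apiserver, system pods, and default service account because no CNI plugin is installed yet (the later warning about pods not scheduling is expected). A condensed sketch of the relevant flags, taken from the command above:

    minikube start -p newest-cni-398707 --driver=kvm2 --container-runtime=crio \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --wait=apiserver,system_pods,default_sa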

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (115.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m55.221809727s)
--- PASS: TestNetworkPlugins/group/auto/Start (115.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (100.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1024 20:31:51.723766   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
E1024 20:31:51.729062   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
E1024 20:31:51.739514   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
E1024 20:31:51.760230   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
E1024 20:31:51.801146   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
E1024 20:31:51.882033   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
E1024 20:31:52.042266   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
E1024 20:31:52.363200   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
E1024 20:31:53.003764   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
E1024 20:31:54.284329   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
E1024 20:31:56.845386   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m40.674281121s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (100.67s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-398707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1024 20:32:01.965644   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-398707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.933968165s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.93s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (12.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-398707 --alsologtostderr -v=3
E1024 20:32:12.206200   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-398707 --alsologtostderr -v=3: (12.465149002s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.47s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-398707 -n newest-cni-398707
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-398707 -n newest-cni-398707: exit status 7 (76.800432ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-398707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
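For anyone rerunning this step by hand, a minimal sketch of the same flow using the commands shown above (profile name taken from this run): check the host state while stopped, then enable the addon. Per the log, minikube status exits 7 for a stopped host, which the test treats as acceptable.

	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-398707 -n newest-cni-398707
	out/minikube-linux-amd64 addons enable dashboard -p newest-cni-398707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4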

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (58.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-398707 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1024 20:32:32.687099   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-398707 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (58.423443337s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-398707 -n newest-cni-398707
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (58.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-w2qm5" [d39f6c7a-fa1f-4518-bad2-0325ce087f83] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.048131058s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-784554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-784554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c5hkk" [6ad70431-f4fe-4b5e-9e8a-5d2d89a3e456] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1024 20:33:10.558354   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/functional-853597/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-c5hkk" [6ad70431-f4fe-4b5e-9e8a-5d2d89a3e456] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.012836775s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.39s)
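A minimal sketch of the same deploy-and-wait check outside the test harness (context name from this run; testdata/netcat-deployment.yaml is the manifest referenced above, relative to the minikube test directory; the 5m timeout is illustrative, not the test's 15m):

	kubectl --context auto-784554 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-784554 wait --for=condition=ready pod -l app=netcat --timeout=5m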

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-784554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-784554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ss25v" [43fef504-7837-4424-a7e7-db0107df2e01] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1024 20:33:13.648057   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-ss25v" [43fef504-7837-4424-a7e7-db0107df2e01] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.020054102s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.49s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-398707 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-398707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-398707 -n newest-cni-398707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-398707 -n newest-cni-398707: exit status 2 (283.820757ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-398707 -n newest-cni-398707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-398707 -n newest-cni-398707: exit status 2 (269.87135ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-398707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-398707 -n newest-cni-398707
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-398707 -n newest-cni-398707
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.79s)
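The pause/unpause cycle above can be reproduced with the same commands (profile name from this run); note that while paused, the status queries exit with status 2 (the log shows APIServer=Paused and Kubelet=Stopped), which the test treats as expected.

	out/minikube-linux-amd64 pause -p newest-cni-398707 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-398707 -n newest-cni-398707
	out/minikube-linux-amd64 unpause -p newest-cni-398707 --alsologtostderr -v=1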

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (97.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m37.670690052s)
--- PASS: TestNetworkPlugins/group/calico/Start (97.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-784554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-784554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (97.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m37.455468383s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (97.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (144.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m24.701684623s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (144.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (153.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1024 20:33:46.232459   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:33:46.237795   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:33:46.248939   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:33:46.269181   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:33:46.310093   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:33:46.390466   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:33:46.550817   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:33:46.870973   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:33:47.511453   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:33:48.792289   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:33:51.352859   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:33:56.474016   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:34:06.715031   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:34:27.195811   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
E1024 20:34:35.568873   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/old-k8s-version-467375/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m33.236011626s)
--- PASS: TestNetworkPlugins/group/flannel/Start (153.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6xt8f" [0e0f7c1c-9f1d-4398-b17a-c1abccd3b478] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.035296101s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-784554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-784554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f6qk7" [f5c3cb47-0ef5-4c60-851e-28f31708dd14] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1024 20:35:08.156347   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-f6qk7" [f5c3cb47-0ef5-4c60-851e-28f31708dd14] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.011533772s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-784554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-784554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-784554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zptx7" [b21a1008-0ccb-4d0b-9950-de9449481032] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1024 20:35:20.392644   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
E1024 20:35:20.397971   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
E1024 20:35:20.408297   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
E1024 20:35:20.428616   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
E1024 20:35:20.471845   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
E1024 20:35:20.552052   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
E1024 20:35:20.713020   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
E1024 20:35:21.033631   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
E1024 20:35:21.674806   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-zptx7" [b21a1008-0ccb-4d0b-9950-de9449481032] Running
E1024 20:35:22.955035   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
E1024 20:35:25.648716   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.018724568s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-784554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (62.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1024 20:35:41.010217   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-784554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m2.475786091s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-784554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-784554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-784554 replace --force -f testdata/netcat-deployment.yaml: (1.306853588s)
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mjncr" [39637a1c-3aa1-4b45-aada-60f016a59e7b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mjncr" [39637a1c-3aa1-4b45-aada-60f016a59e7b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.01430195s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xgdsh" [30107bea-ab53-445b-9e30-bc6064feb49a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.026151708s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
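The controller-pod wait above keys on the app=flannel label in the kube-flannel namespace; a hand-run equivalent is sketched below (context name from this run, timeout shortened for illustration):

	kubectl --context flannel-784554 get pods -n kube-flannel -l app=flannel
	kubectl --context flannel-784554 wait --for=condition=ready pod -l app=flannel -n kube-flannel --timeout=2m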

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-784554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-784554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-784554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fhqcq" [e0271831-f356-4e02-8e7a-242491d22987] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1024 20:36:22.155485   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/addons-866342/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-fhqcq" [e0271831-f356-4e02-8e7a-242491d22987] Running
E1024 20:36:30.076685   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/no-preload-014826/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.015123138s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-784554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-784554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-784554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4f8tf" [ddb6e159-8cb0-4d5c-ae89-7c7483c83fff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1024 20:36:42.451783   16298 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/default-k8s-diff-port-643126/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-4f8tf" [ddb6e159-8cb0-4d5c-ae89-7c7483c83fff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.015004941s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (26.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-784554 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-784554 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.192976071s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-784554 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-784554 exec deployment/netcat -- nslookup kubernetes.default: (10.194725653s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (26.41s)
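The first nslookup above timed out and the retry passed, so the failure looks transient. A minimal way to rerun the same probe by hand, with one retry mirroring the test's second attempt:

	kubectl --context bridge-784554 exec deployment/netcat -- nslookup kubernetes.default || \
	  kubectl --context bridge-784554 exec deployment/netcat -- nslookup kubernetes.default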

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
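The Localhost and HairPin checks above differ only in the target: the first connects from the netcat pod to localhost:8080, the second to the name "netcat" on port 8080 (presumably the pod's own Service, i.e. a hairpin path). Both can be rerun verbatim:

	kubectl --context bridge-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context bridge-784554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"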

                                                
                                    

Test skip (36/292)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.3/cached-images 0
13 TestDownloadOnly/v1.28.3/binaries 0
14 TestDownloadOnly/v1.28.3/kubectl 0
18 TestDownloadOnlyKic 0
32 TestAddons/parallel/Olm 0
44 TestDockerFlags 0
47 TestDockerEnvContainerd 0
49 TestHyperKitDriverInstallOrUpdate 0
50 TestHyperkitDriverSkipUpgrade 0
101 TestFunctional/parallel/DockerEnv 0
102 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
150 TestGvisorAddon 0
151 TestImageBuild 0
184 TestKicCustomNetwork 0
185 TestKicExistingNetwork 0
186 TestKicCustomSubnet 0
187 TestKicStaticIP 0
218 TestChangeNoneUser 0
221 TestScheduledStopWindows 0
223 TestSkaffold 0
225 TestInsufficientStorage 0
229 TestMissingContainerUpgrade 0
246 TestStartStop/group/disable-driver-mounts 0.14
251 TestNetworkPlugins/group/kubenet 4.65
259 TestNetworkPlugins/group/cilium 4.46
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-087071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-087071
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-784554 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-784554

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-784554

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-784554

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-784554

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-784554

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-784554

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-784554

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-784554

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-784554

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-784554

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-784554

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-784554" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-784554" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:57:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.61.41:8443
  name: kubernetes-upgrade-164196
contexts:
- context:
    cluster: kubernetes-upgrade-164196
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:57:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: kubernetes-upgrade-164196
  name: kubernetes-upgrade-164196
current-context: kubernetes-upgrade-164196
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-164196
  user:
    client-certificate: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/kubernetes-upgrade-164196/client.crt
    client-key: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/kubernetes-upgrade-164196/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-784554

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-784554"

                                                
                                                
----------------------- debugLogs end: kubenet-784554 [took: 4.466334866s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-784554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-784554
--- SKIP: TestNetworkPlugins/group/kubenet (4.65s)
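Note: net_test.go:93 skips the kubenet variant because kubenet is a kubelet-level network plugin rather than a CNI plugin, and cri-o only supports CNI-based pod networking. A minimal sketch of that skip decision (hypothetical helper, not the actual net_test.go code):

package main

import "fmt"

// skipReason returns a non-empty reason when a network-plugin variant cannot
// be exercised with the selected container runtime. kubenet is not a CNI
// plugin, and cri-o requires CNI, so that combination is skipped.
func skipReason(runtime, networkPlugin string) string {
	if runtime == "crio" && networkPlugin == "kubenet" {
		return "Skipping the test as crio container runtimes requires CNI"
	}
	return ""
}

func main() {
	if reason := skipReason("crio", "kubenet"); reason != "" {
		fmt.Println(reason)
	}
}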

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-784554 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-784554" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17485-9023/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:58:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.61.41:8443
  name: kubernetes-upgrade-164196
contexts:
- context:
    cluster: kubernetes-upgrade-164196
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:58:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: kubernetes-upgrade-164196
  name: kubernetes-upgrade-164196
current-context: kubernetes-upgrade-164196
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-164196
  user:
    client-certificate: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/kubernetes-upgrade-164196/client.crt
    client-key: /home/jenkins/minikube-integration/17485-9023/.minikube/profiles/kubernetes-upgrade-164196/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-784554

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-784554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-784554"

                                                
                                                
----------------------- debugLogs end: cilium-784554 [took: 4.298899196s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-784554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-784554
--- SKIP: TestNetworkPlugins/group/cilium (4.46s)
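Note: the repeated "context was not found" / "does not exist" errors in both debugLogs blocks above are expected: the kubenet-784554 and cilium-784554 profiles were never started, so the only context in the kubeconfig dumps shown above is kubernetes-upgrade-164196. A minimal sketch of confirming which contexts actually exist (assumes k8s.io/client-go is on the module path; the kubeconfig location here is illustrative, the report uses the Jenkins workspace path):

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative default kubeconfig location.
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.LoadFromFile(filepath.Join(home, ".kube", "config"))
	if err != nil {
		fmt.Println("could not load kubeconfig:", err)
		return
	}
	for _, missing := range []string{"kubenet-784554", "cilium-784554"} {
		if _, ok := cfg.Contexts[missing]; !ok {
			fmt.Printf("context %q not found (matches the errors above)\n", missing)
		}
	}
	fmt.Println("available contexts:")
	for name := range cfg.Contexts {
		fmt.Println(" -", name)
	}
}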

                                                
                                    